Test Report: Docker_Windows 19690

f8db61c9b74e1fc8d4208c01add19855c5953b45:2024-09-23:36339
Failed tests (4/339)

Order  Failed test                                             Duration (s)
33     TestAddons/parallel/Registry                             77.58
55     TestErrorSpam/setup                                      64.51
79     TestFunctional/serial/MinikubeKubectlCmdDirectly          5.18
330    TestStartStop/group/old-k8s-version/serial/SecondStart   417.12
TestAddons/parallel/Registry (77.58s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 9.016ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-kw8kk" [1add3bf4-bfb5-4032-8085-4db4e3c3010d] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0066292s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7l6xx" [6814b759-8ade-4cc6-b7f6-c4c91d60c390] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0118487s
addons_test.go:338: (dbg) Run:  kubectl --context addons-827700 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-827700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-827700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.257913s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr **
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-827700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:353: Unable to complete rest of the test due to connectivity assumptions
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-827700
helpers_test.go:235: (dbg) docker inspect addons-827700:
-- stdout --
	[
	    {
	        "Id": "120e6755226d4772e908b8dd48e556b6296cdac1988a0ec6ba93e875f7c83666",
	        "Created": "2024-09-23T11:09:25.65537353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 777,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T11:09:26.000396455Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/120e6755226d4772e908b8dd48e556b6296cdac1988a0ec6ba93e875f7c83666/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/120e6755226d4772e908b8dd48e556b6296cdac1988a0ec6ba93e875f7c83666/hostname",
	        "HostsPath": "/var/lib/docker/containers/120e6755226d4772e908b8dd48e556b6296cdac1988a0ec6ba93e875f7c83666/hosts",
	        "LogPath": "/var/lib/docker/containers/120e6755226d4772e908b8dd48e556b6296cdac1988a0ec6ba93e875f7c83666/120e6755226d4772e908b8dd48e556b6296cdac1988a0ec6ba93e875f7c83666-json.log",
	        "Name": "/addons-827700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-827700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-827700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/35710471de887a59ba3064920f24214b83d0dcd7fad28ee9ed93f4a94c02a342-init/diff:/var/lib/docker/overlay2/c7287d3444125b9a8090b921db98cb6ed8be2d7a048d39cf2a791cb2793d7251/diff",
	                "MergedDir": "/var/lib/docker/overlay2/35710471de887a59ba3064920f24214b83d0dcd7fad28ee9ed93f4a94c02a342/merged",
	                "UpperDir": "/var/lib/docker/overlay2/35710471de887a59ba3064920f24214b83d0dcd7fad28ee9ed93f4a94c02a342/diff",
	                "WorkDir": "/var/lib/docker/overlay2/35710471de887a59ba3064920f24214b83d0dcd7fad28ee9ed93f4a94c02a342/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-827700",
	                "Source": "/var/lib/docker/volumes/addons-827700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-827700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-827700",
	                "name.minikube.sigs.k8s.io": "addons-827700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7861349839d9b55d2de454bccb624c2f7275c6ce94501aa3b807355a4c76ac15",
	            "SandboxKey": "/var/run/docker/netns/7861349839d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53187"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53188"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53189"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53190"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-827700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0c5a53f4c051a89d367fbf3b58c79741d4a46c44b2450d86709af4ab84b38bee",
	                    "EndpointID": "667e0fc9a267797182f5390b1670a8265f797ae90f20f1d4392bea6a484c5dc3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-827700",
	                        "120e6755226d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
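As an aside, the port mappings buried in the `docker inspect` output above (the host ports that the post-mortem `status` and `logs` commands rely on) can be extracted programmatically. The sketch below is a hypothetical helper, not part of the test suite; it parses a minimal JSON fragment mirroring the structure shown in the log.

```python
import json

# Minimal fragment mirroring the `docker inspect` JSON structure in the log above.
inspect_output = json.loads("""
[
  {
    "Name": "/addons-827700",
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "53187"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "53186"}]
      }
    }
  }
]
""")

def host_ports(inspect_json):
    """Map container port -> (HostIp, HostPort) for the first container,
    skipping ports with no host binding."""
    ports = inspect_json[0]["NetworkSettings"]["Ports"]
    return {p: (b[0]["HostIp"], b[0]["HostPort"]) for p, b in ports.items() if b}

# 8443/tcp is the apiserver port minikube dials; 22/tcp is the SSH port.
print(host_ports(inspect_output))
```

Running this against the full inspect block would also surface the 2376, 5000, and 32443 bindings shown above.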
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-827700 -n addons-827700
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-827700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-827700 logs -n 25: (2.3262284s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-005800   | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC |                     |
	|         | -p download-only-005800                                                                     |                        |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC | 23 Sep 24 11:07 UTC |
	| delete  | -p download-only-005800                                                                     | download-only-005800   | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC | 23 Sep 24 11:07 UTC |
	| delete  | -p download-only-264900                                                                     | download-only-264900   | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC | 23 Sep 24 11:07 UTC |
	| delete  | -p download-only-005800                                                                     | download-only-005800   | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC | 23 Sep 24 11:07 UTC |
	| start   | --download-only -p                                                                          | download-docker-959000 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC |                     |
	|         | download-docker-959000                                                                      |                        |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | -p download-docker-959000                                                                   | download-docker-959000 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC | 23 Sep 24 11:07 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-833500   | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC |                     |
	|         | binary-mirror-833500                                                                        |                        |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |                   |         |                     |                     |
	|         | http://127.0.0.1:53131                                                                      |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | -p binary-mirror-833500                                                                     | binary-mirror-833500   | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC | 23 Sep 24 11:07 UTC |
	| addons  | enable dashboard -p                                                                         | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC |                     |
	|         | addons-827700                                                                               |                        |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC |                     |
	|         | addons-827700                                                                               |                        |                   |         |                     |                     |
	| start   | -p addons-827700 --wait=true                                                                | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC | 23 Sep 24 11:16 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                        |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |                   |         |                     |                     |
	|         | --driver=docker --addons=ingress                                                            |                        |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |                   |         |                     |                     |
	| addons  | addons-827700 addons disable                                                                | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:17 UTC | 23 Sep 24 11:17 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:25 UTC |
	|         | -p addons-827700                                                                            |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:25 UTC |
	|         | -p addons-827700                                                                            |                        |                   |         |                     |                     |
	| addons  | addons-827700 addons disable                                                                | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:25 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |                   |         |                     |                     |
	| addons  | addons-827700 addons                                                                        | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:25 UTC |
	|         | disable metrics-server                                                                      |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-827700 addons disable                                                                | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:25 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |                   |         |                     |                     |
	|         | -v=1                                                                                        |                        |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:25 UTC |
	|         | addons-827700                                                                               |                        |                   |         |                     |                     |
	| ssh     | addons-827700 ssh curl -s                                                                   | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |                   |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |                   |         |                     |                     |
	| ssh     | addons-827700 ssh cat                                                                       | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | /opt/local-path-provisioner/pvc-e0f6395d-66c4-4c42-9f5c-1478cb042762_default_test-pvc/file1 |                        |                   |         |                     |                     |
	| addons  | addons-827700 addons disable                                                                | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | addons-827700                                                                               |                        |                   |         |                     |                     |
	| addons  | addons-827700 addons                                                                        | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-827700 addons                                                                        | addons-827700          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | disable volumesnapshots                                                                     |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:07:34
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:07:34.159627   11788 out.go:345] Setting OutFile to fd 884 ...
	I0923 11:07:34.238027   11788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:07:34.238159   11788 out.go:358] Setting ErrFile to fd 888...
	I0923 11:07:34.238159   11788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:07:34.263194   11788 out.go:352] Setting JSON to false
	I0923 11:07:34.267192   11788 start.go:129] hostinfo: {"hostname":"minikube2","uptime":522,"bootTime":1727089132,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0923 11:07:34.267396   11788 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:07:34.271495   11788 out.go:177] * [addons-827700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:07:34.274764   11788 notify.go:220] Checking for updates...
	I0923 11:07:34.275674   11788 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 11:07:34.277252   11788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:07:34.280865   11788 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0923 11:07:34.283292   11788 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:07:34.286009   11788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:07:34.289337   11788 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:07:34.464424   11788 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 11:07:34.472214   11788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:07:34.783790   11788 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:66 SystemTime:2024-09-23 11:07:34.759247709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:07:34.787797   11788 out.go:177] * Using the docker driver based on user configuration
	I0923 11:07:34.791790   11788 start.go:297] selected driver: docker
	I0923 11:07:34.791790   11788 start.go:901] validating driver "docker" against <nil>
	I0923 11:07:34.792794   11788 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:07:34.854987   11788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:07:35.163985   11788 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:66 SystemTime:2024-09-23 11:07:35.133105177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:07:35.164423   11788 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:07:35.165855   11788 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:07:35.169238   11788 out.go:177] * Using Docker Desktop driver with root privileges
	I0923 11:07:35.171609   11788 cni.go:84] Creating CNI manager for ""
	I0923 11:07:35.171609   11788 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:07:35.171609   11788 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 11:07:35.171609   11788 start.go:340] cluster config:
	{Name:addons-827700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-827700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:07:35.175348   11788 out.go:177] * Starting "addons-827700" primary control-plane node in "addons-827700" cluster
	I0923 11:07:35.177133   11788 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 11:07:35.180961   11788 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:07:35.184229   11788 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:07:35.184229   11788 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:07:35.184967   11788 preload.go:146] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:07:35.184967   11788 cache.go:56] Caching tarball of preloaded images
	I0923 11:07:35.184967   11788 preload.go:172] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 11:07:35.185695   11788 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 11:07:35.186282   11788 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\config.json ...
	I0923 11:07:35.186599   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\config.json: {Name:mkfe798693076f12011d3a169f68b5bb3c9474e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:07:35.266324   11788 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:07:35.266408   11788 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 11:07:35.266438   11788 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 11:07:35.266438   11788 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 11:07:35.266438   11788 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 11:07:35.266438   11788 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 11:07:35.266438   11788 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 11:07:35.266438   11788 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 11:07:35.266438   11788 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 11:08:48.075967   11788 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 11:08:48.076958   11788 cache.go:194] Successfully downloaded all kic artifacts
	I0923 11:08:48.076958   11788 start.go:360] acquireMachinesLock for addons-827700: {Name:mk46837ba8a0a533f49e0fdfcf06b7e23de3cac9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:08:48.076958   11788 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-827700"
	I0923 11:08:48.076958   11788 start.go:93] Provisioning new machine with config: &{Name:addons-827700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-827700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:08:48.076958   11788 start.go:125] createHost starting for "" (driver="docker")
	I0923 11:08:48.082958   11788 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 11:08:48.082958   11788 start.go:159] libmachine.API.Create for "addons-827700" (driver="docker")
	I0923 11:08:48.082958   11788 client.go:168] LocalClient.Create starting
	I0923 11:08:48.083997   11788 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0923 11:08:48.218318   11788 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0923 11:08:48.321144   11788 cli_runner.go:164] Run: docker network inspect addons-827700 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 11:08:48.392641   11788 cli_runner.go:211] docker network inspect addons-827700 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 11:08:48.401422   11788 network_create.go:284] running [docker network inspect addons-827700] to gather additional debugging logs...
	I0923 11:08:48.401422   11788 cli_runner.go:164] Run: docker network inspect addons-827700
	W0923 11:08:48.469974   11788 cli_runner.go:211] docker network inspect addons-827700 returned with exit code 1
	I0923 11:08:48.470094   11788 network_create.go:287] error running [docker network inspect addons-827700]: docker network inspect addons-827700: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-827700 not found
	I0923 11:08:48.470094   11788 network_create.go:289] output of [docker network inspect addons-827700]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-827700 not found
	
	** /stderr **
	I0923 11:08:48.478435   11788 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:08:48.569735   11788 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cea810}
	I0923 11:08:48.569735   11788 network_create.go:124] attempt to create docker network addons-827700 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 11:08:48.577398   11788 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-827700 addons-827700
	I0923 11:08:48.762295   11788 network_create.go:108] docker network addons-827700 192.168.49.0/24 created
	I0923 11:08:48.762524   11788 kic.go:121] calculated static IP "192.168.49.2" for the "addons-827700" container
	I0923 11:08:48.777675   11788 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 11:08:48.859315   11788 cli_runner.go:164] Run: docker volume create addons-827700 --label name.minikube.sigs.k8s.io=addons-827700 --label created_by.minikube.sigs.k8s.io=true
	I0923 11:08:48.942052   11788 oci.go:103] Successfully created a docker volume addons-827700
	I0923 11:08:48.951512   11788 cli_runner.go:164] Run: docker run --rm --name addons-827700-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-827700 --entrypoint /usr/bin/test -v addons-827700:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 11:09:01.927349   11788 cli_runner.go:217] Completed: docker run --rm --name addons-827700-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-827700 --entrypoint /usr/bin/test -v addons-827700:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (12.9758164s)
	I0923 11:09:01.927349   11788 oci.go:107] Successfully prepared a docker volume addons-827700
	I0923 11:09:01.927349   11788 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:09:01.927918   11788 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 11:09:01.938653   11788 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-827700:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 11:09:24.929790   11788 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-827700:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (22.9909443s)
	I0923 11:09:24.929852   11788 kic.go:203] duration metric: took 23.0024658s to extract preloaded images to volume ...
	I0923 11:09:24.938359   11788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:09:25.254725   11788 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-23 11:09:25.22782543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:09:25.263313   11788 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 11:09:25.581718   11788 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-827700 --name addons-827700 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-827700 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-827700 --network addons-827700 --ip 192.168.49.2 --volume addons-827700:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 11:09:26.471583   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Running}}
	I0923 11:09:26.560209   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:09:26.641211   11788 cli_runner.go:164] Run: docker exec addons-827700 stat /var/lib/dpkg/alternatives/iptables
	I0923 11:09:26.795259   11788 oci.go:144] the created container "addons-827700" has a running status.
	I0923 11:09:26.795259   11788 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa...
	I0923 11:09:27.270008   11788 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 11:09:27.384239   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:09:27.484265   11788 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 11:09:27.484265   11788 kic_runner.go:114] Args: [docker exec --privileged addons-827700 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 11:09:27.648257   11788 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa...
	I0923 11:09:30.542094   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:09:30.625722   11788 machine.go:93] provisionDockerMachine start ...
	I0923 11:09:30.633711   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:30.706723   11788 main.go:141] libmachine: Using SSH client type: native
	I0923 11:09:30.717712   11788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 53187 <nil> <nil>}
	I0923 11:09:30.717712   11788 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:09:30.952677   11788 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-827700
	
	I0923 11:09:30.952677   11788 ubuntu.go:169] provisioning hostname "addons-827700"
	I0923 11:09:30.961731   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:31.040696   11788 main.go:141] libmachine: Using SSH client type: native
	I0923 11:09:31.041709   11788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 53187 <nil> <nil>}
	I0923 11:09:31.041709   11788 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-827700 && echo "addons-827700" | sudo tee /etc/hostname
	I0923 11:09:31.286646   11788 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-827700
	
	I0923 11:09:31.294841   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:31.376851   11788 main.go:141] libmachine: Using SSH client type: native
	I0923 11:09:31.377828   11788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 53187 <nil> <nil>}
	I0923 11:09:31.377828   11788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-827700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-827700/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-827700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:09:31.571671   11788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:09:31.571671   11788 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I0923 11:09:31.572836   11788 ubuntu.go:177] setting up certificates
	I0923 11:09:31.572836   11788 provision.go:84] configureAuth start
	I0923 11:09:31.581216   11788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-827700
	I0923 11:09:31.651328   11788 provision.go:143] copyHostCerts
	I0923 11:09:31.651328   11788 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I0923 11:09:31.652306   11788 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 11:09:31.654309   11788 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 11:09:31.654309   11788 provision.go:117] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-827700 san=[127.0.0.1 192.168.49.2 addons-827700 localhost minikube]
	I0923 11:09:31.817101   11788 provision.go:177] copyRemoteCerts
	I0923 11:09:31.831923   11788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:09:31.839624   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:31.914930   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:09:32.050955   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:09:32.108287   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 11:09:32.153645   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:09:32.203526   11788 provision.go:87] duration metric: took 630.6899ms to configureAuth
	I0923 11:09:32.203526   11788 ubuntu.go:193] setting minikube options for container-runtime
	I0923 11:09:32.204520   11788 config.go:182] Loaded profile config "addons-827700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:09:32.212073   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:32.292517   11788 main.go:141] libmachine: Using SSH client type: native
	I0923 11:09:32.292517   11788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 53187 <nil> <nil>}
	I0923 11:09:32.292517   11788 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 11:09:32.490025   11788 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 11:09:32.490025   11788 ubuntu.go:71] root file system type: overlay
	I0923 11:09:32.490799   11788 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 11:09:32.499750   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:32.586672   11788 main.go:141] libmachine: Using SSH client type: native
	I0923 11:09:32.587666   11788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 53187 <nil> <nil>}
	I0923 11:09:32.587666   11788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 11:09:32.804344   11788 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 11:09:32.813302   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:32.899829   11788 main.go:141] libmachine: Using SSH client type: native
	I0923 11:09:32.900088   11788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 53187 <nil> <nil>}
	I0923 11:09:32.900675   11788 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 11:09:34.577646   11788 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-23 11:09:32.791448773 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0923 11:09:34.577717   11788 machine.go:96] duration metric: took 3.9519882s to provisionDockerMachine
	I0923 11:09:34.577768   11788 client.go:171] duration metric: took 46.4947357s to LocalClient.Create
	I0923 11:09:34.577848   11788 start.go:167] duration metric: took 46.4947786s to libmachine.API.Create "addons-827700"
	I0923 11:09:34.577965   11788 start.go:293] postStartSetup for "addons-827700" (driver="docker")
	I0923 11:09:34.578000   11788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:09:34.592743   11788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:09:34.600744   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:34.670453   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:09:34.816787   11788 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:09:34.827939   11788 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 11:09:34.827939   11788 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 11:09:34.827939   11788 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 11:09:34.827939   11788 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 11:09:34.827939   11788 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I0923 11:09:34.828712   11788 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I0923 11:09:34.828712   11788 start.go:296] duration metric: took 250.7115ms for postStartSetup
	I0923 11:09:34.839111   11788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-827700
	I0923 11:09:34.912827   11788 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\config.json ...
	I0923 11:09:34.928825   11788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:09:34.938831   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:35.021101   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:09:35.173407   11788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 11:09:35.187255   11788 start.go:128] duration metric: took 47.1101894s to createHost
	I0923 11:09:35.187255   11788 start.go:83] releasing machines lock for "addons-827700", held for 47.1102219s
	I0923 11:09:35.194971   11788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-827700
	I0923 11:09:35.276692   11788 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 11:09:35.284672   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:35.287673   11788 ssh_runner.go:195] Run: cat /version.json
	I0923 11:09:35.294676   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:35.359701   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:09:35.359701   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	W0923 11:09:35.487319   11788 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 11:09:35.501670   11788 ssh_runner.go:195] Run: systemctl --version
	I0923 11:09:35.539311   11788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:09:35.572573   11788 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0923 11:09:35.596117   11788 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0923 11:09:35.613943   11788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0923 11:09:35.658806   11788 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0923 11:09:35.658858   11788 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 11:09:35.684759   11788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0923 11:09:35.684824   11788 start.go:495] detecting cgroup driver to use...
	I0923 11:09:35.684824   11788 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:09:35.684824   11788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:09:35.729884   11788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:09:35.766635   11788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:09:35.788684   11788 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:09:35.800512   11788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:09:35.835754   11788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:09:35.869278   11788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:09:35.906763   11788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:09:35.942827   11788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:09:35.985727   11788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:09:36.022677   11788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:09:36.060523   11788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 11:09:36.096920   11788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:09:36.131853   11788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:09:36.161835   11788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:09:36.327743   11788 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:09:36.600890   11788 start.go:495] detecting cgroup driver to use...
	I0923 11:09:36.601046   11788 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:09:36.616736   11788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 11:09:36.645698   11788 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 11:09:36.656697   11788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:09:36.683056   11788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:09:36.735108   11788 ssh_runner.go:195] Run: which cri-dockerd
	I0923 11:09:36.759097   11788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:09:36.783741   11788 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 11:09:36.829207   11788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 11:09:37.030886   11788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 11:09:37.234683   11788 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:09:37.234683   11788 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:09:37.285283   11788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:09:37.453719   11788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:09:38.249333   11788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 11:09:38.293818   11788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:09:38.330319   11788 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 11:09:38.493817   11788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 11:09:38.665171   11788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:09:38.832582   11788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 11:09:38.870209   11788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:09:38.911100   11788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:09:39.083476   11788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 11:09:39.489745   11788 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 11:09:39.505708   11788 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 11:09:39.519253   11788 start.go:563] Will wait 60s for crictl version
	I0923 11:09:39.531576   11788 ssh_runner.go:195] Run: which crictl
	I0923 11:09:39.556363   11788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:09:39.790895   11788 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 11:09:39.800041   11788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:09:39.997262   11788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:09:40.061055   11788 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 11:09:40.072647   11788 cli_runner.go:164] Run: docker exec -t addons-827700 dig +short host.docker.internal
	I0923 11:09:40.466838   11788 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0923 11:09:40.480743   11788 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0923 11:09:40.493118   11788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:09:40.527160   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-827700
	I0923 11:09:40.601430   11788 kubeadm.go:883] updating cluster {Name:addons-827700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-827700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:09:40.602442   11788 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:09:40.610431   11788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:09:40.663343   11788 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 11:09:40.663343   11788 docker.go:615] Images already preloaded, skipping extraction
	I0923 11:09:40.672167   11788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:09:40.717648   11788 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 11:09:40.717774   11788 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:09:40.718044   11788 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0923 11:09:40.718207   11788 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-827700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-827700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:09:40.735143   11788 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 11:09:41.353609   11788 cni.go:84] Creating CNI manager for ""
	I0923 11:09:41.353609   11788 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:09:41.353609   11788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:09:41.353609   11788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-827700 NodeName:addons-827700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:09:41.353609   11788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-827700"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
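	The kubeadm.yaml dumped above is a single multi-document YAML stream: four documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which minikube then copies to /var/tmp/minikube/kubeadm.yaml.new. A minimal stdlib-only sketch (not part of minikube) that splits such a stream and lists each document's `kind`:

```python
# Sketch: enumerate the "kind" of each document in a kubeadm-style
# multi-document YAML stream, splitting on the "---" separators.
# The abbreviated config string below is illustrative, not the full file.
config = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def kinds(stream: str) -> list[str]:
    # Drop empty fragments, then pull the "kind:" value from each document.
    docs = [d for d in stream.split("---") if d.strip()]
    out = []
    for d in docs:
        for line in d.splitlines():
            if line.startswith("kind:"):
                out.append(line.split(":", 1)[1].strip())
    return out

print(kinds(config))
# ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration', 'KubeProxyConfiguration']
```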
	I0923 11:09:41.364605   11788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:09:41.388946   11788 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:09:41.402036   11788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:09:41.422678   11788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 11:09:41.457933   11788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:09:41.491511   11788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0923 11:09:41.539955   11788 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 11:09:41.550596   11788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
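	The bash one-liner above makes the control-plane.minikube.internal entry idempotent: it filters out any existing line for that hostname, appends the current IP mapping, and copies the result back over /etc/hosts. A hedged Python sketch of the same logic (illustrative only, not minikube code):

```python
# Sketch of the idempotent /etc/hosts update performed above:
# remove any stale line ending in "<TAB>name", then append "ip<TAB>name".
def update_hosts(hosts: str, ip: str, name: str) -> str:
    kept = [l for l in hosts.splitlines() if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.49.1\tcontrol-plane.minikube.internal\n"
after = update_hosts(before, "192.168.49.2", "control-plane.minikube.internal")
print(after)
```

Running the update twice leaves a single entry, which is why minikube can apply it unconditionally on every start.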
	I0923 11:09:41.582269   11788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:09:41.743716   11788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:09:41.774729   11788 certs.go:68] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700 for IP: 192.168.49.2
	I0923 11:09:41.774800   11788 certs.go:194] generating shared ca certs ...
	I0923 11:09:41.774800   11788 certs.go:226] acquiring lock for ca certs: {Name:mka39b35711ce17aa627001b408a7adb2f266bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:41.775386   11788 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I0923 11:09:41.949933   11788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt ...
	I0923 11:09:41.949933   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt: {Name:mkc5b851ca682f7aff857055d591694d36175fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:41.951936   11788 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key ...
	I0923 11:09:41.951936   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key: {Name:mk9089fc50aceda2aa3f2747811085b675041b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:41.952930   11788 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I0923 11:09:42.155866   11788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0923 11:09:42.155866   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd5c7d70e5d33d063f91e60ee9bd4852fbc5909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:42.156405   11788 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key ...
	I0923 11:09:42.157430   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkbb7b28a2f5e99a3e449ce85c8a848dee712fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:42.158402   11788 certs.go:256] generating profile certs ...
	I0923 11:09:42.158676   11788 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\client.key
	I0923 11:09:42.158676   11788 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\client.crt with IP's: []
	I0923 11:09:42.246565   11788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\client.crt ...
	I0923 11:09:42.246565   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\client.crt: {Name:mk6f9d9e4e8ae053c2a3497b8786141f4726de5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:42.247408   11788 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\client.key ...
	I0923 11:09:42.247408   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\client.key: {Name:mk4b1e6e2a858519d4c0e0b0fc4d2962b46c7278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:42.248413   11788 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.key.f18d77ae
	I0923 11:09:42.249199   11788 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.crt.f18d77ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 11:09:42.405487   11788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.crt.f18d77ae ...
	I0923 11:09:42.405487   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.crt.f18d77ae: {Name:mk5930f1eb2ec34d8e6fff43f61ee879c7c08f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:42.406430   11788 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.key.f18d77ae ...
	I0923 11:09:42.406430   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.key.f18d77ae: {Name:mk5fc73e6ebcb7545fa0b41c0e42e62afa628c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:42.407691   11788 certs.go:381] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.crt.f18d77ae -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.crt
	I0923 11:09:42.419717   11788 certs.go:385] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.key.f18d77ae -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.key
	I0923 11:09:42.420714   11788 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\proxy-client.key
	I0923 11:09:42.420714   11788 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\proxy-client.crt with IP's: []
	I0923 11:09:42.838616   11788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\proxy-client.crt ...
	I0923 11:09:42.838616   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\proxy-client.crt: {Name:mkbd97c29e2b04b7588fb423c7ea021f303f5bd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:42.839596   11788 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\proxy-client.key ...
	I0923 11:09:42.839596   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\proxy-client.key: {Name:mkac3569572501b01752d2922f0ee29eeeaa8164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:09:42.852262   11788 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0923 11:09:42.852262   11788 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 11:09:42.852262   11788 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 11:09:42.853473   11788 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0923 11:09:42.856240   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:09:42.913102   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 11:09:42.959397   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:09:43.010300   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:09:43.079136   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 11:09:43.132287   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:09:43.183734   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:09:43.228964   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-827700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:09:43.275776   11788 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:09:43.327097   11788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:09:43.376269   11788 ssh_runner.go:195] Run: openssl version
	I0923 11:09:43.409119   11788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:09:43.450428   11788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:09:43.463643   11788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:09:43.476048   11788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:09:43.506891   11788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:09:43.543482   11788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:09:43.555417   11788 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:09:43.555417   11788 kubeadm.go:392] StartCluster: {Name:addons-827700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-827700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:09:43.564949   11788 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 11:09:43.625120   11788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:09:43.658169   11788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:09:43.678743   11788 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 11:09:43.690103   11788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:09:43.712742   11788 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:09:43.712742   11788 kubeadm.go:157] found existing configuration files:
	
	I0923 11:09:43.726340   11788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:09:43.750994   11788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:09:43.764145   11788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:09:43.802648   11788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:09:43.823275   11788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:09:43.836885   11788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:09:43.864879   11788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:09:43.887253   11788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:09:43.900498   11788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:09:43.936849   11788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:09:43.962832   11788 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:09:43.977276   11788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:09:43.998901   11788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 11:09:44.343356   11788 kubeadm.go:310] W0923 11:09:44.340264    1979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:09:44.343849   11788 kubeadm.go:310] W0923 11:09:44.341242    1979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:09:44.393822   11788 kubeadm.go:310] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I0923 11:09:44.526112   11788 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 11:09:59.533872   11788 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 11:09:59.533872   11788 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:09:59.534619   11788 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:09:59.534688   11788 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:09:59.535311   11788 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 11:09:59.535476   11788 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:09:59.538350   11788 out.go:235]   - Generating certificates and keys ...
	I0923 11:09:59.538903   11788 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:09:59.539080   11788 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:09:59.539983   11788 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:09:59.540272   11788 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:09:59.540365   11788 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:09:59.540708   11788 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:09:59.540708   11788 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:09:59.541793   11788 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-827700 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:09:59.542078   11788 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:09:59.542696   11788 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-827700 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:09:59.542992   11788 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:09:59.543262   11788 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:09:59.543688   11788 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:09:59.543859   11788 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:09:59.543859   11788 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:09:59.543859   11788 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 11:09:59.543859   11788 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:09:59.543859   11788 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:09:59.543859   11788 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:09:59.543859   11788 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:09:59.543859   11788 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:09:59.547747   11788 out.go:235]   - Booting up control plane ...
	I0923 11:09:59.547747   11788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:09:59.547747   11788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:09:59.548320   11788 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:09:59.548474   11788 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:09:59.548474   11788 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:09:59.548474   11788 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:09:59.548474   11788 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 11:09:59.548474   11788 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 11:09:59.549349   11788 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002644694s
	I0923 11:09:59.549349   11788 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 11:09:59.549349   11788 kubeadm.go:310] [api-check] The API server is healthy after 8.503051772s
	I0923 11:09:59.549349   11788 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 11:09:59.550375   11788 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 11:09:59.550375   11788 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 11:09:59.550375   11788 kubeadm.go:310] [mark-control-plane] Marking the node addons-827700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 11:09:59.550375   11788 kubeadm.go:310] [bootstrap-token] Using token: uzcsyl.8tmtcywd6hazr5od
	I0923 11:09:59.553321   11788 out.go:235]   - Configuring RBAC rules ...
	I0923 11:09:59.554321   11788 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 11:09:59.554321   11788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 11:09:59.554321   11788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 11:09:59.554321   11788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 11:09:59.555324   11788 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 11:09:59.555324   11788 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 11:09:59.555324   11788 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 11:09:59.555324   11788 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 11:09:59.555324   11788 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 11:09:59.555324   11788 kubeadm.go:310] 
	I0923 11:09:59.556325   11788 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 11:09:59.556325   11788 kubeadm.go:310] 
	I0923 11:09:59.556325   11788 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 11:09:59.556325   11788 kubeadm.go:310] 
	I0923 11:09:59.556325   11788 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 11:09:59.556325   11788 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 11:09:59.556325   11788 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 11:09:59.556325   11788 kubeadm.go:310] 
	I0923 11:09:59.556325   11788 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 11:09:59.556325   11788 kubeadm.go:310] 
	I0923 11:09:59.556325   11788 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 11:09:59.556325   11788 kubeadm.go:310] 
	I0923 11:09:59.557313   11788 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 11:09:59.557313   11788 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 11:09:59.557313   11788 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 11:09:59.557313   11788 kubeadm.go:310] 
	I0923 11:09:59.557313   11788 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 11:09:59.557313   11788 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 11:09:59.557313   11788 kubeadm.go:310] 
	I0923 11:09:59.557313   11788 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uzcsyl.8tmtcywd6hazr5od \
	I0923 11:09:59.558308   11788 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c60087aea1b9c959a3bd352b721a35c36fcf11acd6b08291bdd22c7f2d03c7af \
	I0923 11:09:59.558308   11788 kubeadm.go:310] 	--control-plane 
	I0923 11:09:59.558308   11788 kubeadm.go:310] 
	I0923 11:09:59.558308   11788 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 11:09:59.558308   11788 kubeadm.go:310] 
	I0923 11:09:59.558308   11788 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uzcsyl.8tmtcywd6hazr5od \
	I0923 11:09:59.558308   11788 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c60087aea1b9c959a3bd352b721a35c36fcf11acd6b08291bdd22c7f2d03c7af 
	I0923 11:09:59.558308   11788 cni.go:84] Creating CNI manager for ""
	I0923 11:09:59.558308   11788 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:09:59.561316   11788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 11:09:59.576307   11788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 11:09:59.639786   11788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 11:09:59.833146   11788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:09:59.853230   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-827700 minikube.k8s.io/updated_at=2024_09_23T11_09_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-827700 minikube.k8s.io/primary=true
	I0923 11:09:59.853230   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:09:59.861988   11788 ops.go:34] apiserver oom_adj: -16
	I0923 11:10:00.300408   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:10:00.800029   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:10:01.299381   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:10:01.803198   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:10:02.298672   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:10:02.799882   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:10:03.304576   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:10:03.796856   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:10:04.299897   11788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:10:04.464819   11788 kubeadm.go:1113] duration metric: took 4.6315557s to wait for elevateKubeSystemPrivileges
	I0923 11:10:04.464819   11788 kubeadm.go:394] duration metric: took 20.909367s to StartCluster
	I0923 11:10:04.464819   11788 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:10:04.465442   11788 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 11:10:04.466359   11788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:10:04.468419   11788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 11:10:04.469171   11788 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:10:04.469230   11788 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 11:10:04.469584   11788 addons.go:69] Setting yakd=true in profile "addons-827700"
	I0923 11:10:04.469674   11788 addons.go:234] Setting addon yakd=true in "addons-827700"
	I0923 11:10:04.469674   11788 config.go:182] Loaded profile config "addons-827700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:10:04.469674   11788 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-827700"
	I0923 11:10:04.469859   11788 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-827700"
	I0923 11:10:04.470097   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.470097   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.469674   11788 addons.go:69] Setting default-storageclass=true in profile "addons-827700"
	I0923 11:10:04.470239   11788 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-827700"
	I0923 11:10:04.470353   11788 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-827700"
	I0923 11:10:04.470353   11788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-827700"
	I0923 11:10:04.470501   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.470610   11788 addons.go:69] Setting ingress-dns=true in profile "addons-827700"
	I0923 11:10:04.470610   11788 addons.go:234] Setting addon ingress-dns=true in "addons-827700"
	I0923 11:10:04.470823   11788 addons.go:69] Setting registry=true in profile "addons-827700"
	I0923 11:10:04.470823   11788 addons.go:234] Setting addon registry=true in "addons-827700"
	I0923 11:10:04.470823   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.470988   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.470988   11788 addons.go:69] Setting inspektor-gadget=true in profile "addons-827700"
	I0923 11:10:04.470988   11788 addons.go:234] Setting addon inspektor-gadget=true in "addons-827700"
	I0923 11:10:04.470988   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.470988   11788 addons.go:69] Setting gcp-auth=true in profile "addons-827700"
	I0923 11:10:04.470988   11788 mustload.go:65] Loading cluster: addons-827700
	I0923 11:10:04.471913   11788 addons.go:69] Setting metrics-server=true in profile "addons-827700"
	I0923 11:10:04.471913   11788 addons.go:234] Setting addon metrics-server=true in "addons-827700"
	I0923 11:10:04.472158   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.472205   11788 config.go:182] Loaded profile config "addons-827700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:10:04.472326   11788 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-827700"
	I0923 11:10:04.472392   11788 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-827700"
	I0923 11:10:04.472688   11788 addons.go:69] Setting volcano=true in profile "addons-827700"
	I0923 11:10:04.472811   11788 addons.go:234] Setting addon volcano=true in "addons-827700"
	I0923 11:10:04.472811   11788 addons.go:69] Setting storage-provisioner=true in profile "addons-827700"
	I0923 11:10:04.472919   11788 addons.go:234] Setting addon storage-provisioner=true in "addons-827700"
	I0923 11:10:04.472919   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.473383   11788 out.go:177] * Verifying Kubernetes components...
	I0923 11:10:04.473634   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.473634   11788 addons.go:69] Setting ingress=true in profile "addons-827700"
	I0923 11:10:04.473634   11788 addons.go:234] Setting addon ingress=true in "addons-827700"
	I0923 11:10:04.473862   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.474007   11788 addons.go:69] Setting volumesnapshots=true in profile "addons-827700"
	I0923 11:10:04.474144   11788 addons.go:234] Setting addon volumesnapshots=true in "addons-827700"
	I0923 11:10:04.474232   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.469674   11788 addons.go:69] Setting cloud-spanner=true in profile "addons-827700"
	I0923 11:10:04.474410   11788 addons.go:234] Setting addon cloud-spanner=true in "addons-827700"
	I0923 11:10:04.474950   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.508700   11788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:10:04.510914   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.513698   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.527439   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.527829   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.527829   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.528477   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.529424   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.529424   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.531411   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.531495   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.532113   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.545712   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.545712   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.550017   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.554735   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.685242   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.687252   11788 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 11:10:04.687252   11788 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 11:10:04.688233   11788 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 11:10:04.690238   11788 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 11:10:04.693241   11788 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 11:10:04.694232   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 11:10:04.694232   11788 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 11:10:04.694232   11788 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:10:04.694232   11788 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 11:10:04.694232   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 11:10:04.695253   11788 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-827700"
	I0923 11:10:04.696231   11788 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 11:10:04.697256   11788 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 11:10:04.697256   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.699260   11788 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 11:10:04.701241   11788 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 11:10:04.704254   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.702262   11788 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 11:10:04.706289   11788 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 11:10:04.707269   11788 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 11:10:04.709245   11788 addons.go:234] Setting addon default-storageclass=true in "addons-827700"
	I0923 11:10:04.709245   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:04.709245   11788 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:10:04.709245   11788 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 11:10:04.710240   11788 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 11:10:04.710240   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.717238   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.717238   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.717238   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.719247   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.719247   11788 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 11:10:04.728240   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.735960   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.736943   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.743005   11788 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0923 11:10:04.745933   11788 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 11:10:04.753943   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:04.753943   11788 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 11:10:04.761949   11788 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 11:10:04.765950   11788 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 11:10:04.769943   11788 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 11:10:04.770943   11788 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 11:10:04.775933   11788 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 11:10:04.774943   11788 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0923 11:10:04.780943   11788 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 11:10:04.783073   11788 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 11:10:04.783073   11788 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 11:10:04.783937   11788 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:10:04.783937   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 11:10:04.783937   11788 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 11:10:04.784938   11788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:10:04.787939   11788 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 11:10:04.791942   11788 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:10:04.788937   11788 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:10:04.791942   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 11:10:04.791942   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:10:04.795943   11788 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:10:04.797935   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.798934   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.804934   11788 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:10:04.812011   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.818466   11788 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:10:04.818466   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 11:10:04.821400   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.844421   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.856435   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.861408   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.868404   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.879416   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.883400   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.883400   11788 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:10:04.884394   11788 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:10:04.889405   11788 out.go:201] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                      │
	│    Registry addon with docker driver uses port 53190 please use that instead of default port 5000    │
	│                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 11:10:04.892399   11788 out.go:177] * For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
	I0923 11:10:04.895432   11788 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 11:10:04.897409   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.898397   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.900398   11788 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 11:10:04.902401   11788 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 11:10:04.902401   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 11:10:04.909398   11788 out.go:177]   - Using image docker.io/busybox:stable
	I0923 11:10:04.911417   11788 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 11:10:04.914413   11788 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:10:04.914413   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 11:10:04.924402   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.928408   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.929424   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:04.943395   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.947404   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.954452   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.992396   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.996426   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:04.997412   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:05.001415   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	W0923 11:10:05.024458   11788 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:10:05.024700   11788 retry.go:31] will retry after 215.320511ms: ssh: handshake failed: EOF
	W0923 11:10:05.025043   11788 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:10:05.025043   11788 retry.go:31] will retry after 321.342523ms: ssh: handshake failed: EOF
	W0923 11:10:05.318322   11788 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:10:05.318322   11788 retry.go:31] will retry after 220.012726ms: ssh: handshake failed: EOF
	W0923 11:10:05.417644   11788 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:10:05.417644   11788 retry.go:31] will retry after 424.796753ms: ssh: handshake failed: EOF
	I0923 11:10:05.721918   11788 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.2534972s)
	I0923 11:10:05.722878   11788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 11:10:05.722878   11788 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.2141758s)
	I0923 11:10:05.740534   11788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:10:06.024042   11788 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 11:10:06.024042   11788 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 11:10:06.041639   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:10:06.123609   11788 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 11:10:06.123609   11788 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 11:10:06.145211   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 11:10:06.146592   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:10:06.147317   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:10:06.221993   11788 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:10:06.221993   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 11:10:06.222537   11788 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 11:10:06.222717   11788 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 11:10:06.238775   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:10:06.327183   11788 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 11:10:06.327183   11788 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 11:10:06.342607   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:10:06.343599   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:10:06.723000   11788 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 11:10:06.723000   11788 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 11:10:06.723121   11788 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 11:10:06.723000   11788 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 11:10:06.823531   11788 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 11:10:06.823604   11788 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 11:10:06.923035   11788 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 11:10:06.923035   11788 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 11:10:06.923035   11788 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:10:06.924216   11788 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 11:10:06.940879   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:10:07.318092   11788 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 11:10:07.318092   11788 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 11:10:07.318092   11788 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 11:10:07.318092   11788 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 11:10:07.522749   11788 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 11:10:07.522749   11788 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 11:10:07.527050   11788 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 11:10:07.527050   11788 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 11:10:07.624085   11788 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:10:07.624085   11788 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 11:10:07.625845   11788 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 11:10:07.625845   11788 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 11:10:07.919665   11788 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 11:10:07.919665   11788 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 11:10:08.023437   11788 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:10:08.023559   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 11:10:08.120336   11788 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:10:08.120336   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 11:10:08.224663   11788 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 11:10:08.224788   11788 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 11:10:08.339417   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:10:08.418204   11788 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 11:10:08.418204   11788 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 11:10:08.615443   11788 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 11:10:08.615443   11788 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 11:10:08.641184   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:10:08.837566   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:10:08.923386   11788 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 11:10:08.923418   11788 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 11:10:08.923727   11788 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:10:08.923805   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 11:10:09.122326   11788 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 11:10:09.122326   11788 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 11:10:09.339117   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:10:09.721251   11788 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:10:09.721251   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 11:10:09.721251   11788 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 11:10:09.721890   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 11:10:10.024934   11788 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.284393s)
	I0923 11:10:10.025423   11788 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.3024443s)
	I0923 11:10:10.025644   11788 start.go:971] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
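	[editor note] The sed pipeline completed two lines above rewrites the CoreDNS ConfigMap in place: it inserts a `hosts` block before the existing `forward . /etc/resolv.conf` directive and a `log` directive before `errors`. Under the values shown in the log, the resulting Corefile fragment looks roughly like this (a sketch, not the verbatim ConfigMap):

	```
	.:53 {
	    log
	    errors
	    hosts {
	       192.168.65.254 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    # ...remaining default CoreDNS plugins...
	}
	```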
	I0923 11:10:10.035503   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:10.129335   11788 node_ready.go:35] waiting up to 6m0s for node "addons-827700" to be "Ready" ...
	I0923 11:10:10.323366   11788 node_ready.go:49] node "addons-827700" has status "Ready":"True"
	I0923 11:10:10.323398   11788 node_ready.go:38] duration metric: took 194.063ms for node "addons-827700" to be "Ready" ...
	I0923 11:10:10.323398   11788 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:10:10.439050   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:10:10.515134   11788 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 11:10:10.515383   11788 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 11:10:10.630802   11788 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:10.719095   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.6770493s)
	I0923 11:10:10.921223   11788 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-827700" context rescaled to 1 replicas
	I0923 11:10:11.120582   11788 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 11:10:11.120582   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 11:10:11.617164   11788 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 11:10:11.617164   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 11:10:12.322311   11788 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:10:12.322311   11788 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 11:10:12.835338   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:10:13.621596   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:15.115795   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.9691403s)
	I0923 11:10:15.115795   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.9705679s)
	I0923 11:10:16.018190   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:18.122600   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:19.731195   11788 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 11:10:19.738860   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:19.824430   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:20.722350   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:21.115964   11788 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 11:10:21.427674   11788 addons.go:234] Setting addon gcp-auth=true in "addons-827700"
	I0923 11:10:21.427879   11788 host.go:66] Checking if "addons-827700" exists ...
	I0923 11:10:21.449207   11788 cli_runner.go:164] Run: docker container inspect addons-827700 --format={{.State.Status}}
	I0923 11:10:21.546066   11788 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 11:10:21.553836   11788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-827700
	I0923 11:10:21.627004   11788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53187 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-827700\id_rsa Username:docker}
	I0923 11:10:23.027981   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:25.618747   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:27.923151   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:29.414309   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (23.2669573s)
	I0923 11:10:29.414309   11788 addons.go:475] Verifying addon ingress=true in "addons-827700"
	I0923 11:10:29.414309   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (23.175499s)
	I0923 11:10:29.415308   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (23.0716744s)
	I0923 11:10:29.421633   11788 out.go:177] * Verifying ingress addon...
	I0923 11:10:29.430294   11788 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 11:10:29.524343   11788 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 11:10:29.524343   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:29.727286   11788 pod_ready.go:98] pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:10:28 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:10:05 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:10:05 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:10:05 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:10:04 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-23 11:10:05 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-23 11:10:19 +0000 UTC,FinishedAt:2024-09-23 11:10:25 +0000 UTC,ContainerID:docker://3a234280fa7b8828d430c8ee44417d9f8d79165b6a119195d3be73e9afe7f5e8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://3a234280fa7b8828d430c8ee44417d9f8d79165b6a119195d3be73e9afe7f5e8 Started:0xc00254e5fc AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002263f40} {Name:kube-api-access-2zrjf MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002263f50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 11:10:29.727286   11788 pod_ready.go:82] duration metric: took 19.0964561s for pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace to be "Ready" ...
	E0923 11:10:29.727286   11788 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-4bw6t" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:10:28 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:10:05 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:10:05 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:10:05 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:10:04 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-23 11:10:05 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-23 11:10:19 +0000 UTC,FinishedAt:2024-09-23 11:10:25 +0000 UTC,ContainerID:docker://3a234280fa7b8828d430c8ee44417d9f8d79165b6a119195d3be73e9afe7f5e8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://3a234280fa7b8828d430c8ee44417d9f8d79165b6a119195d3be73e9afe7f5e8 Started:0xc00254e5fc AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002263f40} {Name:kube-api-access-2zrjf MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002263f50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 11:10:29.727286   11788 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:30.073379   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:30.522041   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:31.032093   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:31.526634   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:31.817621   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:32.027621   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:32.716238   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:33.023431   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:33.534035   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:33.924673   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:34.023700   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:34.629650   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:35.131103   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (28.190184s)
	I0923 11:10:35.131103   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (28.7884556s)
	I0923 11:10:35.131103   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (26.7916485s)
	I0923 11:10:35.131103   11788 addons.go:475] Verifying addon metrics-server=true in "addons-827700"
	I0923 11:10:35.131103   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (26.4898825s)
	I0923 11:10:35.131103   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (26.2935006s)
	I0923 11:10:35.131103   11788 addons.go:475] Verifying addon registry=true in "addons-827700"
	I0923 11:10:35.131790   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (25.7925609s)
	W0923 11:10:35.131790   11788 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
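	[editor note] The `no matches for kind "VolumeSnapshotClass"` error above is the common CRD/CR ordering race: the `VolumeSnapshotClass` object is applied in the same `kubectl apply` invocation as the CRDs that define it, before the API server has finished registering the new kind; minikube handles this by retrying the apply. A minimal sketch of the conventional workaround, assuming the same manifest filenames as in the log, is to apply the CRDs first, wait for them to be Established, and only then apply the dependent custom resource:

	```shell
	# Apply the CRD manifests on their own first.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	# Block until the API server reports the CRD as Established.
	kubectl wait --for condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# Now the VolumeSnapshotClass kind is resolvable and this apply succeeds.
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	```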
	I0923 11:10:35.131942   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (24.6927068s)
	I0923 11:10:35.134832   11788 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-827700 service yakd-dashboard -n yakd-dashboard
	
	I0923 11:10:35.135192   11788 retry.go:31] will retry after 271.284927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:10:35.137737   11788 out.go:177] * Verifying registry addon...
	I0923 11:10:35.145627   11788 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 11:10:35.216741   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:35.319723   11788 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 11:10:35.319723   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:35.433745   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:10:35.520473   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:35.816068   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:36.019406   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:36.218055   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:36.327093   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:36.620226   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:36.923612   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:37.125633   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:37.315071   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:37.526243   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:37.530613   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (24.6952432s)
	I0923 11:10:37.530613   11788 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (15.9845299s)
	I0923 11:10:37.530613   11788 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-827700"
	I0923 11:10:37.559349   11788 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:10:37.570077   11788 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 11:10:37.611930   11788 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 11:10:37.612636   11788 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 11:10:37.632042   11788 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 11:10:37.632135   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:37.656567   11788 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 11:10:37.656567   11788 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 11:10:37.812346   11788 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 11:10:37.812520   11788 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 11:10:37.918130   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:38.021588   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:38.028926   11788 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:10:38.028926   11788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 11:10:38.142218   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:38.221758   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:38.346754   11788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:10:38.426890   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:38.517867   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:38.623348   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:38.714844   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:38.940914   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:39.127140   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:39.217636   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:39.519070   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:39.625689   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:39.717600   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:39.940010   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:40.123886   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:40.218918   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:40.521679   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:40.629819   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:40.717766   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:40.825490   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:41.030294   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:41.219084   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:41.219631   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:41.428211   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.9944584s)
	I0923 11:10:41.516563   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:41.643832   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:41.727238   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:41.942850   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:42.132807   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:42.215886   11788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.869126s)
	I0923 11:10:42.226359   11788 addons.go:475] Verifying addon gcp-auth=true in "addons-827700"
	I0923 11:10:42.226359   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:42.236760   11788 out.go:177] * Verifying gcp-auth addon...
	I0923 11:10:42.243597   11788 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 11:10:42.329655   11788 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:10:42.442935   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:42.622690   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:42.725900   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:42.941059   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:43.126685   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:43.156040   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:43.249435   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:43.440731   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:43.622911   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:43.657931   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:43.940093   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:44.120751   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:44.155535   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:44.441569   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:44.623489   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:44.656040   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:44.940484   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:45.121648   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:45.152925   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:45.439756   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:45.621881   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:45.654183   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:45.741598   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:45.940143   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:46.120262   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:46.153965   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:46.439192   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:46.622593   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:46.654096   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:46.938257   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:47.121632   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:47.153103   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:47.440333   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:47.622549   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:47.652871   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:47.743948   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:47.939306   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:48.141132   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:48.217837   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:48.439805   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:48.622707   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:48.653025   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:48.939573   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:49.127685   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:49.151823   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:49.439601   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:49.624568   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:49.652429   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:49.744605   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:49.939173   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:50.123087   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:50.153007   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:50.441740   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:50.621743   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:50.653150   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:50.941033   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:51.123883   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:51.152155   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:51.439923   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:51.622476   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:51.652727   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:51.939610   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:52.122795   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:52.153008   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:52.242483   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:52.439490   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:52.622932   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:52.653615   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:52.940876   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:53.121727   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:53.153151   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:53.439472   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:53.628082   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:53.653681   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:53.939058   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:54.121703   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:54.153769   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:54.243672   11788 pod_ready.go:103] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"False"
	I0923 11:10:54.439497   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:54.629867   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:54.725101   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:54.743029   11788 pod_ready.go:93] pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace has status "Ready":"True"
	I0923 11:10:54.743093   11788 pod_ready.go:82] duration metric: took 25.0157694s for pod "coredns-7c65d6cfc9-gntkt" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:54.743208   11788 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-827700" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:54.754934   11788 pod_ready.go:93] pod "etcd-addons-827700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:10:54.754934   11788 pod_ready.go:82] duration metric: took 11.7268ms for pod "etcd-addons-827700" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:54.754934   11788 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-827700" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:54.819040   11788 pod_ready.go:93] pod "kube-apiserver-addons-827700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:10:54.819040   11788 pod_ready.go:82] duration metric: took 64.1053ms for pod "kube-apiserver-addons-827700" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:54.819040   11788 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-827700" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:54.841413   11788 pod_ready.go:93] pod "kube-controller-manager-addons-827700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:10:54.841581   11788 pod_ready.go:82] duration metric: took 22.4897ms for pod "kube-controller-manager-addons-827700" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:54.841581   11788 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-84526" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:54.856756   11788 pod_ready.go:93] pod "kube-proxy-84526" in "kube-system" namespace has status "Ready":"True"
	I0923 11:10:54.856786   11788 pod_ready.go:82] duration metric: took 15.2052ms for pod "kube-proxy-84526" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:54.856786   11788 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-827700" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:54.942568   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:55.121072   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:55.134749   11788 pod_ready.go:93] pod "kube-scheduler-addons-827700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:10:55.134817   11788 pod_ready.go:82] duration metric: took 278.0308ms for pod "kube-scheduler-addons-827700" in "kube-system" namespace to be "Ready" ...
	I0923 11:10:55.134878   11788 pod_ready.go:39] duration metric: took 44.811413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:10:55.134878   11788 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:10:55.152740   11788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:10:55.156130   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:55.226859   11788 api_server.go:72] duration metric: took 50.7574743s to wait for apiserver process to appear ...
	I0923 11:10:55.226986   11788 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:10:55.226986   11788 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53186/healthz ...
	I0923 11:10:55.245030   11788 api_server.go:279] https://127.0.0.1:53186/healthz returned 200:
	ok
	I0923 11:10:55.248479   11788 api_server.go:141] control plane version: v1.31.1
	I0923 11:10:55.248479   11788 api_server.go:131] duration metric: took 21.4926ms to wait for apiserver health ...
	I0923 11:10:55.248479   11788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:10:55.344528   11788 system_pods.go:59] 17 kube-system pods found
	I0923 11:10:55.344651   11788 system_pods.go:61] "coredns-7c65d6cfc9-gntkt" [14b88a56-8c95-4545-977d-577a8c848904] Running
	I0923 11:10:55.344651   11788 system_pods.go:61] "csi-hostpath-attacher-0" [542a0f40-0e84-4ea0-b280-a4955500fc0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:10:55.344712   11788 system_pods.go:61] "csi-hostpath-resizer-0" [12613199-dc46-4246-93d2-17fbe6632292] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 11:10:55.344712   11788 system_pods.go:61] "csi-hostpathplugin-7zbkt" [97067601-d071-412d-a29b-565cb4222fe6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:10:55.344712   11788 system_pods.go:61] "etcd-addons-827700" [9b644347-33a0-4a7e-a485-0d34c2ba190a] Running
	I0923 11:10:55.344712   11788 system_pods.go:61] "kube-apiserver-addons-827700" [7ba9bba8-2538-443b-a00c-ad9b49da94e3] Running
	I0923 11:10:55.344712   11788 system_pods.go:61] "kube-controller-manager-addons-827700" [46f23585-d66a-4836-a979-48f5de0be2a7] Running
	I0923 11:10:55.344787   11788 system_pods.go:61] "kube-ingress-dns-minikube" [cb8eb694-1b31-46c5-afa0-8f140bb5ce84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 11:10:55.344787   11788 system_pods.go:61] "kube-proxy-84526" [4ff00d0b-48bc-4512-8af6-310f77b2f459] Running
	I0923 11:10:55.344787   11788 system_pods.go:61] "kube-scheduler-addons-827700" [d4aee0fa-5baf-4392-aad1-ced6337f47c2] Running
	I0923 11:10:55.344838   11788 system_pods.go:61] "metrics-server-84c5f94fbc-pb9f5" [a8e12ecd-fbb1-43b6-ad32-62445b93b363] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:10:55.344838   11788 system_pods.go:61] "nvidia-device-plugin-daemonset-4p2zq" [b5c30d33-8dae-49de-a646-f149449da74f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 11:10:55.344899   11788 system_pods.go:61] "registry-66c9cd494c-kw8kk" [1add3bf4-bfb5-4032-8085-4db4e3c3010d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:10:55.344899   11788 system_pods.go:61] "registry-proxy-7l6xx" [6814b759-8ade-4cc6-b7f6-c4c91d60c390] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:10:55.344961   11788 system_pods.go:61] "snapshot-controller-56fcc65765-5drg8" [f48dba33-f6a3-4145-9b85-2d8be9e3f9fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:10:55.344961   11788 system_pods.go:61] "snapshot-controller-56fcc65765-gwb5c" [3d8eee02-a810-4ebb-a0fa-ec8c15b7f653] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:10:55.344961   11788 system_pods.go:61] "storage-provisioner" [568c5974-0722-43d8-9dfe-8430342721ec] Running
	I0923 11:10:55.345028   11788 system_pods.go:74] duration metric: took 96.4818ms to wait for pod list to return data ...
	I0923 11:10:55.345028   11788 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:10:55.438584   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:55.536077   11788 default_sa.go:45] found service account: "default"
	I0923 11:10:55.536077   11788 default_sa.go:55] duration metric: took 191.048ms for default service account to be created ...
	I0923 11:10:55.536077   11788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:10:55.621757   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:55.653712   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:55.745347   11788 system_pods.go:86] 17 kube-system pods found
	I0923 11:10:55.745902   11788 system_pods.go:89] "coredns-7c65d6cfc9-gntkt" [14b88a56-8c95-4545-977d-577a8c848904] Running
	I0923 11:10:55.745902   11788 system_pods.go:89] "csi-hostpath-attacher-0" [542a0f40-0e84-4ea0-b280-a4955500fc0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:10:55.745902   11788 system_pods.go:89] "csi-hostpath-resizer-0" [12613199-dc46-4246-93d2-17fbe6632292] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 11:10:55.745902   11788 system_pods.go:89] "csi-hostpathplugin-7zbkt" [97067601-d071-412d-a29b-565cb4222fe6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:10:55.745902   11788 system_pods.go:89] "etcd-addons-827700" [9b644347-33a0-4a7e-a485-0d34c2ba190a] Running
	I0923 11:10:55.745902   11788 system_pods.go:89] "kube-apiserver-addons-827700" [7ba9bba8-2538-443b-a00c-ad9b49da94e3] Running
	I0923 11:10:55.745902   11788 system_pods.go:89] "kube-controller-manager-addons-827700" [46f23585-d66a-4836-a979-48f5de0be2a7] Running
	I0923 11:10:55.746044   11788 system_pods.go:89] "kube-ingress-dns-minikube" [cb8eb694-1b31-46c5-afa0-8f140bb5ce84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 11:10:55.746044   11788 system_pods.go:89] "kube-proxy-84526" [4ff00d0b-48bc-4512-8af6-310f77b2f459] Running
	I0923 11:10:55.746102   11788 system_pods.go:89] "kube-scheduler-addons-827700" [d4aee0fa-5baf-4392-aad1-ced6337f47c2] Running
	I0923 11:10:55.746102   11788 system_pods.go:89] "metrics-server-84c5f94fbc-pb9f5" [a8e12ecd-fbb1-43b6-ad32-62445b93b363] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:10:55.746146   11788 system_pods.go:89] "nvidia-device-plugin-daemonset-4p2zq" [b5c30d33-8dae-49de-a646-f149449da74f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 11:10:55.746190   11788 system_pods.go:89] "registry-66c9cd494c-kw8kk" [1add3bf4-bfb5-4032-8085-4db4e3c3010d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:10:55.746190   11788 system_pods.go:89] "registry-proxy-7l6xx" [6814b759-8ade-4cc6-b7f6-c4c91d60c390] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:10:55.746190   11788 system_pods.go:89] "snapshot-controller-56fcc65765-5drg8" [f48dba33-f6a3-4145-9b85-2d8be9e3f9fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:10:55.746190   11788 system_pods.go:89] "snapshot-controller-56fcc65765-gwb5c" [3d8eee02-a810-4ebb-a0fa-ec8c15b7f653] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:10:55.746190   11788 system_pods.go:89] "storage-provisioner" [568c5974-0722-43d8-9dfe-8430342721ec] Running
	I0923 11:10:55.746190   11788 system_pods.go:126] duration metric: took 210.1128ms to wait for k8s-apps to be running ...
	I0923 11:10:55.746190   11788 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:10:55.760185   11788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:10:55.816362   11788 system_svc.go:56] duration metric: took 70.0349ms WaitForService to wait for kubelet
	I0923 11:10:55.816362   11788 kubeadm.go:582] duration metric: took 51.3469764s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:10:55.816477   11788 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:10:55.936846   11788 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0923 11:10:55.937127   11788 node_conditions.go:123] node cpu capacity is 16
	I0923 11:10:55.937205   11788 node_conditions.go:105] duration metric: took 120.7276ms to run NodePressure ...
	I0923 11:10:55.937247   11788 start.go:241] waiting for startup goroutines ...
	I0923 11:10:55.939216   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:56.121956   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:56.152140   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:56.439070   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:56.621761   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:56.651904   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:56.941383   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:57.122263   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:57.154917   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:57.439569   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:57.624792   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:57.650952   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:57.940124   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:58.123758   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:58.172642   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:58.441438   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:58.622874   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:58.653566   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:59.029562   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:59.122077   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:59.322152   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:59.440972   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:10:59.620094   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:10:59.652467   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:10:59.939488   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:00.121003   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:00.153547   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:00.439195   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:00.829274   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:00.831284   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:00.944819   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:01.126790   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:01.152497   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:01.444708   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:01.620577   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:01.653902   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:01.942781   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:02.122962   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:02.155941   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:02.440967   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:02.620307   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:02.653383   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:02.939287   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:03.123304   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:03.159288   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:03.441386   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:03.622776   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:03.653782   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:03.947845   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:04.123837   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:04.153842   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:04.439840   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:04.621882   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:04.652848   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:05.027431   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:05.128531   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:05.233143   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:05.440563   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:05.624958   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:05.668514   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:05.942085   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:06.121634   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:06.153369   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:06.440100   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:06.621800   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:06.652217   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:06.940015   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:07.122830   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:07.153435   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:07.440451   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:07.621198   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:07.654317   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:07.940744   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:08.124760   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:08.154740   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:08.440618   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:08.622564   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:08.652565   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:08.939471   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:09.122117   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:09.153083   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:09.438478   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:09.620736   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:09.653344   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:09.938826   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:10.121469   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:10.165374   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:10.439069   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:10.622934   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:10.653878   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:10.942746   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:11.121024   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:11.153140   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:11.441188   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:11.627913   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:11.723054   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:11.939625   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:12.124449   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:12.154468   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:12.613059   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:12.623063   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:12.710961   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:12.940356   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:13.128927   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:13.152939   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:13.443564   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:13.622552   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:13.653568   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:13.939446   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:14.122457   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:14.153445   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:14.440856   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:14.623870   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:14.652871   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:14.940880   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:15.125436   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:15.153114   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:15.440597   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:15.622934   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:15.655773   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:15.938808   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:16.123183   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:16.155166   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:16.439289   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:16.622010   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:16.653399   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:16.938308   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:17.122029   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:17.152119   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:17.439311   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:17.621412   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:17.654009   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:17.939907   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:18.123272   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:18.152867   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:18.439158   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:18.620198   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:18.652562   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:18.939432   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:19.123075   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:19.155091   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:19.440075   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:19.620918   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:19.652616   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:19.939391   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:20.121439   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:20.150924   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:20.439442   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:20.621121   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:20.652976   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:20.939916   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:21.122074   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:21.152812   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:21.441612   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:21.622189   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:21.651940   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:21.938895   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:22.122196   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:22.155362   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:22.439008   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:22.623761   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:22.653204   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:22.940056   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:23.121681   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:23.152994   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:23.439401   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:23.621635   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:23.654633   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:23.939816   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:24.124800   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:24.153222   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:24.439340   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:24.626063   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:24.653186   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:24.939803   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:25.122797   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:25.153186   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:25.441634   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:25.621115   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:25.653366   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:25.939551   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:26.121884   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:26.153483   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:26.438306   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:26.621719   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:26.652271   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:26.938078   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:27.122306   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:27.152557   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:27.439882   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:27.622020   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:27.652029   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:27.938950   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:28.121218   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:28.161395   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:28.438649   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:28.621408   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:28.653074   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:28.940089   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:29.121908   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:29.155787   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:29.439565   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:29.622067   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:29.652652   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:29.938669   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:30.122319   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:30.153123   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:30.439082   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:30.621736   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:30.652835   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:30.946891   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:31.121059   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:31.153142   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:31.439565   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:31.621855   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:31.651901   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:31.939480   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:32.121588   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:32.152707   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:32.438651   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:32.623105   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:32.662061   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:32.939136   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:33.122846   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:33.160009   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:11:33.446028   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:11:33.620607   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:11:33.654477   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical "waiting for pod ..., current state: Pending" polling messages for these same three selectors, repeated roughly every 500ms from 11:11:33 through 11:12:17, elided ...]
	I0923 11:12:17.153401   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:17.438672   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:17.623835   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:17.652642   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:17.939089   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:18.122092   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:18.153947   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:18.439105   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:18.621804   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:18.653582   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:18.937925   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:19.121563   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:19.152652   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:19.439968   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:19.621989   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:19.652144   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:19.940383   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:20.121903   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:20.152796   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:20.439089   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:20.620626   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:20.654422   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:20.939184   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:21.122095   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:21.153615   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:21.439055   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:21.621003   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:21.652866   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:21.940944   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:22.121956   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:22.152844   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:22.448325   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:22.621564   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:22.653634   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:22.943680   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:23.120726   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:23.157635   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:23.440376   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:23.621966   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:23.652569   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:23.941011   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:24.121976   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:24.154302   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:24.439680   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:24.620463   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:24.653005   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:24.939614   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:25.122051   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:25.151855   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:25.441112   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:25.621790   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:25.652941   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:25.941698   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:26.123778   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:26.153145   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:26.440959   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:26.621929   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:26.653925   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:26.938974   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:27.123520   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:27.154511   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:27.442509   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:27.623531   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:27.652527   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:27.940517   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:28.136554   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:28.234511   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:28.440286   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:28.621279   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:28.653315   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:28.941281   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:29.122289   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:29.154315   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:29.440930   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:29.624936   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:29.651923   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:29.941666   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:30.125377   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:30.155386   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:30.439262   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:30.622412   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:30.652500   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:30.940237   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:31.121774   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:31.152805   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:31.440346   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:31.624849   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:31.652015   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:31.940832   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:32.122443   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:32.153452   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:32.439942   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:32.622180   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:32.654041   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:32.940660   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:33.123459   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:33.226856   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:33.442281   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:33.622781   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:33.654760   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:34.008657   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:34.125562   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:34.154191   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:34.439399   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:34.621651   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:34.654303   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:34.940080   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:35.122605   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:35.152859   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:35.440998   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:35.622753   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:35.653525   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:35.940355   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:36.123478   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:36.202793   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:36.439806   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:36.624213   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:36.654420   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:37.126495   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:37.127583   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:37.153301   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:37.439811   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:37.622411   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:37.653133   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:37.940248   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:38.279446   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:38.280169   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:38.512203   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:38.668017   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:38.668802   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:38.941826   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:39.122238   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:39.151720   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:39.442997   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:39.624917   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:39.654914   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:39.940824   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:40.136854   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:40.202843   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:40.445823   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:40.623816   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:40.653827   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:40.941826   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:41.123468   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:41.154458   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:41.439997   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:41.622305   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:41.666321   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:41.941317   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:42.123578   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:42.153061   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:42.440348   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:42.621332   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:42.653342   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:42.940970   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:43.126979   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:43.153977   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:43.441352   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:43.621362   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:43.653146   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:43.940903   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:44.122456   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:44.205589   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:44.439446   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:44.622867   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:44.652294   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:44.940600   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:45.125166   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:45.153207   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:45.440206   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:45.622814   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:45.653544   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:45.940834   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:46.122275   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:46.153571   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:46.444673   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:46.623757   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:46.654391   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:46.940511   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:47.124537   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:47.153900   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:47.440141   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:47.621535   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:47.653018   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:47.939809   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:48.128348   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:48.201633   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:48.446601   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:48.622678   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:48.652325   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:48.939694   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:49.124438   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:49.432722   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:49.438628   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:49.621560   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:49.653611   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:49.942475   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:50.126287   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:50.154031   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:50.632832   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:50.633407   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:50.686714   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:50.946797   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:51.124016   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:51.163613   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:51.515650   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:51.629640   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:51.705660   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:51.944629   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:52.124633   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:52.202674   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:52.440777   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:52.623081   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:52.702645   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:52.940654   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:53.122656   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:53.152652   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:53.440681   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:53.624063   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:53.710536   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:54.004274   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:54.123258   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:54.202996   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:54.506166   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:54.622052   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:54.708768   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:54.940729   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:55.121924   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:55.203511   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:55.443674   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:55.623908   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:55.701104   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:56.001596   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:56.124337   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:56.153599   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:56.440973   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:56.623659   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:56.653903   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:56.940749   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:57.127990   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:57.153047   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:57.439598   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:57.623395   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:57.700079   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:57.941789   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:58.127834   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:58.154702   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:58.440343   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:58.622705   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:58.652909   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:58.941438   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:59.122610   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:59.199356   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:59.441083   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:00.130201   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:00.130814   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:00.130814   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:00.140901   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:00.154656   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:00.447560   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:00.622747   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:00.653781   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:00.940332   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:01.125228   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:01.153227   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:01.439715   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:01.622206   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:01.653205   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:01.946122   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:02.140382   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:02.154418   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:02.439902   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:02.621667   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:02.653409   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:02.941838   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:03.122152   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:03.153930   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:03.529433   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:03.621244   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:03.653558   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:03.942027   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:04.122013   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:04.153523   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:04.439803   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:04.623544   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:04.717168   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:04.939194   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:05.122943   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:05.153684   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:05.437167   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:05.720085   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:05.722166   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:05.943511   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:06.128284   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:06.211515   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:06.443930   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:06.623226   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:06.722164   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:06.938778   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:07.122915   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:07.152982   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:07.441711   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:07.623273   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:07.655240   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:07.940549   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:08.122285   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:08.154800   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:08.441037   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:08.627249   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:08.654904   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:08.940946   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:09.123273   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:09.152769   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:09.440061   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:09.623394   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:09.651927   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:09.939823   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:10.123141   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:10.153216   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:10.443049   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:10.710335   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:10.711742   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:10.944489   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:11.130086   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:11.153747   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:11.439253   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:11.628594   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:11.653016   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:11.941912   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:12.121860   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:12.154477   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:12.438909   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:12.622919   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:12.653920   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:12.940925   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:13.127920   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:13.153916   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:13.478189   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:13.622053   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:13.652276   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:13.939955   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:14.124121   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:14.153305   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:14.438867   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:14.623261   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:14.701524   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:15.008273   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:15.122103   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:15.153678   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:15.439659   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:15.622366   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:15.652921   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:15.939603   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:16.124215   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:16.163018   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:16.607725   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:16.621794   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:16.703200   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:16.942360   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:17.120094   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:17.159000   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:17.441249   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:17.622206   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:17.652209   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:17.940684   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:18.121693   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:18.154685   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:18.444299   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:18.623100   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:18.653068   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:18.946007   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:19.121458   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:19.153464   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:19.438968   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:19.621598   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:19.653532   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:19.941892   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:20.122118   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:20.153801   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:20.440143   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:20.624581   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:20.653143   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:20.939941   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:21.123611   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:21.153135   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:21.440943   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:21.622429   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:21.653155   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:22.076294   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:22.123152   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:22.156718   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:22.442551   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:22.622659   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:22.652291   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:22.940988   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:23.120656   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:23.153171   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:23.519915   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:23.622934   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:23.653917   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:23.940973   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:24.121899   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:24.156922   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:24.439913   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:24.621622   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:24.656127   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:24.939705   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:25.122909   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:25.200315   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:25.442279   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:25.623359   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:25.652962   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:25.939426   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:26.123017   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:26.155227   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:26.442225   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:26.712144   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:26.714965   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:26.944135   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:27.124868   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:27.227740   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:27.445504   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:27.621474   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:27.655303   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:27.937901   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:28.125107   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:28.154580   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:28.439582   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:28.621576   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:28.653581   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:28.940385   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:29.120360   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:29.153365   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:29.441346   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:29.620827   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:29.653445   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:29.939979   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:30.123068   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:30.154801   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:30.439469   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:30.622067   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:30.654570   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:30.939686   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:31.137726   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:31.153055   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:31.453177   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:31.624040   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:31.662346   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:31.944079   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:32.120680   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:32.152625   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:32.443782   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:32.622055   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:32.654062   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:32.939708   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:33.123680   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:33.164668   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:33.440476   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:33.623479   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:33.653459   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:33.939278   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:34.122423   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:34.153080   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:34.448020   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:34.622054   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:34.654122   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:34.940481   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:35.121464   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:35.159215   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:35.570214   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:35.622502   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:35.655041   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:35.942550   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:36.123696   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:36.153706   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:36.441557   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:36.621711   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:36.654617   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:36.943604   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:37.125596   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:37.154619   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:37.439617   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:37.622623   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:37.653602   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:37.939610   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:38.122616   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:38.154613   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:38.439053   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:38.622202   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:38.653262   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:38.939851   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:39.126823   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:39.156054   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:39.441063   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:39.623071   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:39.653091   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:39.941059   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:40.123473   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:40.153474   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:40.500448   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:40.622875   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:40.654508   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:40.940883   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:41.121332   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:41.152955   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:41.439772   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:41.622873   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:41.653249   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:41.939818   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:42.124287   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:42.153226   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:42.439812   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:42.621510   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:42.653094   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:42.940032   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:43.120924   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:43.154351   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:43.676826   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:43.677450   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:43.678340   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:44.175110   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:44.177220   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:44.177593   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:44.439649   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:44.623177   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:44.653488   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:44.940199   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:45.121574   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:45.152550   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:45.440957   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:45.624022   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:45.696803   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:45.942593   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:46.122694   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:46.153521   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:46.439376   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:46.621569   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:46.653617   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:46.948060   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:47.123225   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:47.150683   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:47.441360   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:47.621652   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:47.653310   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:47.939716   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:48.125822   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:48.225212   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:48.439265   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:48.622612   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:48.653371   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:48.939022   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:49.122383   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:49.152860   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:49.439305   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:49.621998   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:49.653197   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:49.939596   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:50.124655   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:50.196530   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:50.440782   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:50.621526   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:50.652343   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:50.940085   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:51.121955   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:51.153248   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:51.440738   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:51.620988   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:51.652409   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:51.939898   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:52.124301   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:52.152697   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:52.439475   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:52.622972   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:52.653387   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:52.938980   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:53.121977   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:53.152647   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:53.440098   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:53.922319   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:53.924346   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:53.938659   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:54.137960   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:54.152961   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:54.502086   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:54.625077   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:54.696769   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:54.998706   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:55.119649   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:55.153426   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:55.440350   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:55.623172   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:55.652523   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:13:55.940293   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:56.213328   11788 kapi.go:107] duration metric: took 3m21.0678398s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 11:13:56.213462   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:56.511541   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:56.624158   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:56.940814   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:57.123146   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:57.440662   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:57.623984   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:57.940213   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:58.123480   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:58.499230   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:58.623099   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:58.939458   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:59.124160   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:59.442079   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:59.624160   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:59.941132   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:00.122358   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:00.440253   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:00.623277   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:00.939248   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:01.120744   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:01.438748   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:01.629169   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:01.940501   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:02.125285   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:02.441528   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:02.627736   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:02.942491   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:03.121743   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:03.439892   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:03.623419   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:03.940784   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:04.130878   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:04.501245   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:04.621373   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:04.940531   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:05.121439   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:05.439425   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:05.621794   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:06.001860   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:06.198744   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:06.441043   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:06.623956   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:06.939799   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:07.127452   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:07.439738   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:07.623259   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:07.943205   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:08.122801   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:08.440861   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:08.623336   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:08.943909   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:09.122894   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:09.439579   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:09.621592   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:09.940573   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:10.123556   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:10.439701   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:10.624002   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:10.939077   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:11.121939   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:11.443588   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:11.622103   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:11.940576   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:12.232829   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:12.441236   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:12.622970   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:12.941091   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:13.130884   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:13.438203   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:13.622150   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:13.940108   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:14.126112   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:14.441399   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:14.621680   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:14.937706   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:15.125756   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:15.441077   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:15.622961   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:15.940316   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:16.123141   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:16.442179   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:16.622797   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:16.962007   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:17.123160   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:17.442480   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:17.626295   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:14:17.941094   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:18.213336   11788 kapi.go:107] duration metric: took 3m40.6009717s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 11:14:18.440099   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:18.938872   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:19.439861   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:19.992602   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:20.439760   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:20.939525   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:21.443680   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:21.939592   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:22.440322   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:22.939393   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:23.439485   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:23.941817   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:24.443699   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:24.940150   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:25.439916   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:25.939930   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:26.493899   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:26.943090   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:27.493161   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:27.940353   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:28.441256   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:28.947308   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:29.440377   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:29.940054   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:30.439342   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:30.939656   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:31.439641   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:31.940341   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:32.439624   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:32.939810   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:33.438862   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:33.940333   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:34.437785   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:34.940627   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:35.439466   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:35.939900   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:36.440705   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:36.941720   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:37.438952   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:37.939657   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:38.440422   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:38.941903   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:39.443499   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:39.946567   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:40.444435   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:40.942359   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:41.439945   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:41.942204   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:42.440741   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:42.945105   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:43.442571   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:43.939018   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:44.440573   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:45.003229   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:45.494644   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:46.097208   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:46.492567   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:46.942288   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:47.495116   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:47.991517   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:48.495660   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:48.995289   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:49.496639   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:49.998660   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:50.499245   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:50.940074   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:51.492815   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:51.940813   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:52.438745   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:52.939784   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:53.439609   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:53.939937   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:54.493203   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:54.938997   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:55.441106   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:55.941897   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:56.448181   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:56.939866   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:57.440830   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:57.939678   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:58.440106   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:58.939600   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:59.440434   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:14:59.993541   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:15:00.442014   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:15:00.940460   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:15:01.441096   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:15:01.942706   11788 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:15:02.443185   11788 kapi.go:107] duration metric: took 4m33.012352s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 11:16:10.282000   11788 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:16:10.282000   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:10.751632   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:11.252499   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:11.755114   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:12.253730   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:12.753948   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:13.281147   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:13.751762   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:14.253754   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:14.752122   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:15.253534   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:15.751777   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:16.282392   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:16.752372   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:17.253836   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:17.783821   11788 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:16:18.259528   11788 kapi.go:107] duration metric: took 5m36.0152336s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 11:16:18.261972   11788 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-827700 cluster.
	I0923 11:16:18.264582   11788 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 11:16:18.267032   11788 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 11:16:18.270133   11788 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, storage-provisioner-rancher, volcano, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 11:16:18.273721   11788 addons.go:510] duration metric: took 6m13.8037977s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner storage-provisioner-rancher volcano metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0923 11:16:18.273927   11788 start.go:246] waiting for cluster config update ...
	I0923 11:16:18.273927   11788 start.go:255] writing updated cluster config ...
	I0923 11:16:18.288677   11788 ssh_runner.go:195] Run: rm -f paused
	I0923 11:16:18.539713   11788 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:16:18.542368   11788 out.go:177] * Done! kubectl is now configured to use "addons-827700" cluster and "default" namespace by default
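The gcp-auth addon output above says a pod can opt out of credential mounting via a `gcp-auth-skip-secret` label. A minimal sketch of such a pod spec, assuming only the label key from the log message (the pod name and image are illustrative):

```yaml
# Hypothetical pod spec; only the gcp-auth-skip-secret label comes from the
# addon output above -- the name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds            # example name
  labels:
    gcp-auth-skip-secret: "true"  # tells the gcp-auth webhook to skip this pod
spec:
  containers:
    - name: app
      image: nginx
```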
	
	
	==> Docker <==
	Sep 23 11:26:18 addons-827700 dockerd[1377]: time="2024-09-23T11:26:18.931006907Z" level=info msg="ignoring event" container=12505f8ff68e7514eeb409134c579610edfbc4b9336076b6d27a8c88e2cb38cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:20 addons-827700 dockerd[1377]: time="2024-09-23T11:26:20.234014432Z" level=info msg="ignoring event" container=5d6d706b5fc32f14e1311c327c84c99398c16e98ce39ec168fd9872e097751c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:20 addons-827700 dockerd[1377]: time="2024-09-23T11:26:20.591883274Z" level=info msg="ignoring event" container=cee578f055f61901d3baaa57e4d732566970f1b6dab98edcc2dedf12a590c4f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:20 addons-827700 dockerd[1377]: time="2024-09-23T11:26:20.642937859Z" level=info msg="ignoring event" container=14d59aaa24aeb7236127075c03ab1704151d61b6b0318d0f54aec75a65d27439 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:21 addons-827700 dockerd[1377]: time="2024-09-23T11:26:21.132166127Z" level=info msg="ignoring event" container=d4f1d6c534a7d27b888faa76e680ca44e3e94b3f9e79c67f2fa288d8dadea150 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:21 addons-827700 dockerd[1377]: time="2024-09-23T11:26:21.563575449Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=817ef02fdda9bf4e traceID=951fb870506133ee6fbb03534fabbfcf
	Sep 23 11:26:21 addons-827700 dockerd[1377]: time="2024-09-23T11:26:21.628047444Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=817ef02fdda9bf4e traceID=951fb870506133ee6fbb03534fabbfcf
	Sep 23 11:26:21 addons-827700 dockerd[1377]: time="2024-09-23T11:26:21.644985998Z" level=info msg="ignoring event" container=d79e0d3310b38c8241b67d89cc72974192ea828a47eda970f9291fa14105347b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:24 addons-827700 dockerd[1377]: time="2024-09-23T11:26:24.561589474Z" level=info msg="ignoring event" container=5cd2b1e6f0187c877cd666d95ae27d764a70656cdce9b262aa535b55fb657398 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:24 addons-827700 dockerd[1377]: time="2024-09-23T11:26:24.833660938Z" level=info msg="ignoring event" container=f6c6e103cd5553807fb6018e692d420e75829401fc2fd232f420d858e476dbe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:24 addons-827700 dockerd[1377]: time="2024-09-23T11:26:24.924653223Z" level=info msg="ignoring event" container=75b02257b6cc725675e79a468e24b534088d3a38aecd3e43baa198d2dd688fcb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:24 addons-827700 dockerd[1377]: time="2024-09-23T11:26:24.924784736Z" level=info msg="ignoring event" container=ad248797f3b90615ad1a9b27b6a4c5a4104b1401dd00940ae20ab08b38734fba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:24 addons-827700 dockerd[1377]: time="2024-09-23T11:26:24.924845142Z" level=info msg="ignoring event" container=8a87c6ab3a03820ef99beaa57561391a349800be50598342c02a4ab19b28841f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:24 addons-827700 dockerd[1377]: time="2024-09-23T11:26:24.925270783Z" level=info msg="ignoring event" container=59d6d7ade4d315bcc56529b6c26556d498b6521d1966452cc98fa8c2afee21b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:25 addons-827700 dockerd[1377]: time="2024-09-23T11:26:25.022724198Z" level=info msg="ignoring event" container=ed862e8679f678bf72199bf9f60c57e05d37c5174289a5d7c9a325e2dde20c08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:25 addons-827700 dockerd[1377]: time="2024-09-23T11:26:25.444414872Z" level=info msg="ignoring event" container=e243c480ec14e02b0b9eeb825f1c2e8a6a843d985f8f4acbfb182da0a0b66b10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:25 addons-827700 cri-dockerd[1649]: time="2024-09-23T11:26:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"csi-hostpath-attacher-0_kube-system\": unexpected command output nsenter: cannot open /proc/5771/ns/net: No such file or directory\n with error: exit status 1"
	Sep 23 11:26:25 addons-827700 dockerd[1377]: time="2024-09-23T11:26:25.694929532Z" level=info msg="ignoring event" container=60564596c1c733e226a16844a3ec5112155f47e42a2183fd94be09909ad0e06b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:25 addons-827700 dockerd[1377]: time="2024-09-23T11:26:25.788147534Z" level=info msg="ignoring event" container=797fd543d59b9a7b3e95826cd434666a67bc3cef03626fede11e946468b3c884 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:25 addons-827700 dockerd[1377]: time="2024-09-23T11:26:25.857892744Z" level=info msg="ignoring event" container=13226af3dc48be196ec73ce0e788ceef4bf2d8c629459d7c4b1428fd1cb673b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:32 addons-827700 dockerd[1377]: time="2024-09-23T11:26:32.425046281Z" level=info msg="ignoring event" container=4b54b53dfe19ab5617e016a249003a8e908d061e6238a7a4daa2f9dd752c81c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:32 addons-827700 dockerd[1377]: time="2024-09-23T11:26:32.451607368Z" level=info msg="ignoring event" container=367cfe51cf472617deb393849bd33c61f294f63a156c789b41ec3f515e55a3c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:32 addons-827700 dockerd[1377]: time="2024-09-23T11:26:32.914129702Z" level=info msg="ignoring event" container=b7cf860e2cc696436c3c25374ec8cb6589e2f2f94c7436cf988354b3b91b4df6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:33 addons-827700 dockerd[1377]: time="2024-09-23T11:26:33.023512052Z" level=info msg="ignoring event" container=375da66e2bc803e7a1a8d642951d238b59c8ed07c6158034ad2b8abdc5d165c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:34 addons-827700 dockerd[1377]: time="2024-09-23T11:26:34.640377480Z" level=info msg="ignoring event" container=2a136adc317d4f2cdaae00be7b88ce99ad8c5784d73bf8e1be19830f41d8f07c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	12505f8ff68e7       a416a98b71e22                                                                                                                18 seconds ago      Exited              helper-pod                0                   d79e0d3310b38       helper-pod-delete-pvc-e0f6395d-66c4-4c42-9f5c-1478cb042762
	b97ed451eba05       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              24 seconds ago      Exited              busybox                   0                   e5b80e9c47f32       test-local-path
	5b19e93383355       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              31 seconds ago      Exited              helper-pod                0                   01d4536ddc06b       helper-pod-create-pvc-e0f6395d-66c4-4c42-9f5c-1478cb042762
	2579721a03bcc       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                32 seconds ago      Running             nginx                     0                   ef411fa197023       nginx
	259a329d453dd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   235ac8a832a71       gcp-auth-89d5ffd79-w5qpz
	87f2094e5dfae       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   c197edfc7c259       ingress-nginx-controller-bc57996ff-zs6bs
	c06ac9d9e11d1       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Running             registry-proxy            0                   363e7adec1144       registry-proxy-7l6xx
	438eb105d691d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   13 minutes ago      Exited              patch                     0                   579b058d7f8e8       ingress-nginx-admission-patch-wzdwg
	2b6b7cc914f26       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   13 minutes ago      Exited              create                    0                   789e57c345122       ingress-nginx-admission-create-cb9w8
	77592177e4ab3       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       14 minutes ago      Running             local-path-provisioner    0                   ccc7df2cee8ac       local-path-provisioner-86d989889c-9n56j
	d924f31b5568e       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             14 minutes ago      Running             registry                  0                   b00189d4020d0       registry-66c9cd494c-kw8kk
	33206f6bf9849       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             14 minutes ago      Running             minikube-ingress-dns      0                   852edd11cd3d3       kube-ingress-dns-minikube
	409ca10fc0ac9       6e38f40d628db                                                                                                                16 minutes ago      Running             storage-provisioner       0                   eb26bd20f3390       storage-provisioner
	206ff329be032       c69fa2e9cbf5f                                                                                                                16 minutes ago      Running             coredns                   0                   7bc48699116d4       coredns-7c65d6cfc9-gntkt
	74b304643e8d5       60c005f310ff3                                                                                                                16 minutes ago      Running             kube-proxy                0                   5253efaac2310       kube-proxy-84526
	45b4a6fc2aa02       9aa1fad941575                                                                                                                16 minutes ago      Running             kube-scheduler            0                   6fb28e7d4cd4d       kube-scheduler-addons-827700
	6d4f39411282b       6bab7719df100                                                                                                                16 minutes ago      Running             kube-apiserver            0                   2442ebe4b332d       kube-apiserver-addons-827700
	2d5d22e85fb0a       2e96e5913fc06                                                                                                                16 minutes ago      Running             etcd                      0                   7c8414e2f62f9       etcd-addons-827700
	8311ef1556cb6       175ffd71cce3d                                                                                                                16 minutes ago      Running             kube-controller-manager   0                   1c8e3d7e9a7f9       kube-controller-manager-addons-827700
	
	
	==> controller_ingress [87f2094e5dfa] <==
	I0923 11:15:02.886600       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"05c1a71c-b8b8-4c02-8162-e9910eb2d3ff", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0923 11:15:04.089477       7 nginx.go:317] "Starting NGINX process"
	I0923 11:15:04.089614       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0923 11:15:04.090826       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0923 11:15:04.091311       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0923 11:15:04.109196       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0923 11:15:04.109450       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-zs6bs"
	I0923 11:15:04.115751       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-zs6bs" node="addons-827700"
	I0923 11:15:04.148743       7 controller.go:213] "Backend successfully reloaded"
	I0923 11:15:04.149000       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0923 11:15:04.149031       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-zs6bs", UID:"9515bd18-48e1-4731-beb1-835ca431eb10", APIVersion:"v1", ResourceVersion:"1527", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0923 11:25:54.308543       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0923 11:25:54.335737       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.027s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:0.028s testedConfigurationSize:18.1kB}
	I0923 11:25:54.335935       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0923 11:25:54.347641       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0923 11:25:54.348414       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"b2b0b11b-f3bd-4256-835e-24a56d204530", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2866", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0923 11:25:55.244521       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0923 11:25:55.245020       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0923 11:25:55.332458       7 controller.go:213] "Backend successfully reloaded"
	I0923 11:25:55.333173       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-zs6bs", UID:"9515bd18-48e1-4731-beb1-835ca431eb10", APIVersion:"v1", ResourceVersion:"1527", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0923 11:25:58.624080       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0923 11:26:04.062481       7 status.go:304] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	W0923 11:26:04.078780       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0923 11:26:04.078851       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"b2b0b11b-f3bd-4256-835e-24a56d204530", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2959", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	10.244.0.1 - - [23/Sep/2024:11:26:12 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" 81 0.001 [default-nginx-80] [] 10.244.0.31:80 615 0.001 200 7fea2ca1598148f307ca9163ba39cc3f
	
	
	==> coredns [206ff329be03] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] 10.244.0.7:38671 - 65179 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000442643s
	[INFO] 10.244.0.7:38671 - 10852 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000641862s
	[INFO] 10.244.0.7:52366 - 49029 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000351434s
	[INFO] 10.244.0.7:52366 - 55936 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000164316s
	[INFO] 10.244.0.7:33652 - 4338 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000354735s
	[INFO] 10.244.0.7:33652 - 39415 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000270926s
	[INFO] 10.244.0.7:41691 - 5844 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00021122s
	[INFO] 10.244.0.7:41691 - 20951 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000325431s
	[INFO] 10.244.0.7:59408 - 42492 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000164516s
	[INFO] 10.244.0.7:59408 - 55521 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000334332s
	[INFO] 10.244.0.7:42767 - 20230 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000218021s
	[INFO] 10.244.0.7:42767 - 11269 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000425542s
	[INFO] 10.244.0.7:49632 - 22165 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000146714s
	[INFO] 10.244.0.7:49632 - 3728 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00041604s
	[INFO] 10.244.0.7:51749 - 4117 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000564054s
	[INFO] 10.244.0.7:51749 - 13078 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00062426s
	[INFO] 10.244.0.25:43189 - 46089 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.010875664s
	[INFO] 10.244.0.25:34614 - 3312 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.01133651s
	[INFO] 10.244.0.25:52627 - 49798 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000432343s
	[INFO] 10.244.0.25:34776 - 57310 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000734972s
	[INFO] 10.244.0.25:46273 - 53427 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000414741s
	[INFO] 10.244.0.25:46065 - 20509 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000661864s
	[INFO] 10.244.0.25:35899 - 60717 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.064147779s
	[INFO] 10.244.0.25:55637 - 25928 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.064320796s
	
	
	==> describe nodes <==
	Name:               addons-827700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-827700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-827700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_09_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-827700
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:09:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-827700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:26:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:26:09 +0000   Mon, 23 Sep 2024 11:09:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:26:09 +0000   Mon, 23 Sep 2024 11:09:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:26:09 +0000   Mon, 23 Sep 2024 11:09:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:26:09 +0000   Mon, 23 Sep 2024 11:09:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-827700
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	System Info:
	  Machine ID:                 591f941ea8f34716bd0910d2929e11a6
	  System UUID:                591f941ea8f34716bd0910d2929e11a6
	  Boot ID:                    39082465-ae0b-4792-bc81-a99f7997c7d1
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  gcp-auth                    gcp-auth-89d5ffd79-w5qpz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-zs6bs    100m (0%)     0 (0%)      90Mi (0%)        0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-gntkt                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-addons-827700                          100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-827700                250m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-827700       200m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-84526                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-827700                100m (0%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 registry-66c9cd494c-kw8kk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 registry-proxy-7l6xx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          local-path-provisioner-86d989889c-9n56j     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                             Age   From             Message
	  ----     ------                             ----  ----             -------
	  Normal   Starting                           16m   kube-proxy       
	  Warning  PossibleMemoryBackedVolumesOnDisk  16m   kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   Starting                           16m   kubelet          Starting kubelet.
	  Warning  CgroupV1                           16m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced            16m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory            16m   kubelet          Node addons-827700 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              16m   kubelet          Node addons-827700 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               16m   kubelet          Node addons-827700 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode                     16m   node-controller  Node addons-827700 event: Registered Node addons-827700 in Controller
	
	
	==> dmesg <==
	[  +0.504499] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +3.300766] FS-Cache: Duplicate cookie detected
	[  +0.001174] FS-Cache: O-cookie c=00000010 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001836] FS-Cache: O-cookie d=00000000b0d5c2d6{9P.session} n=00000000522d2314
	[  +0.001778] FS-Cache: O-key=[10] '34323934393337393534'
	[  +0.001288] FS-Cache: N-cookie c=00000011 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001302] FS-Cache: N-cookie d=00000000b0d5c2d6{9P.session} n=00000000b670bc28
	[  +0.001551] FS-Cache: N-key=[10] '34323934393337393534'
	[  +0.011137] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002189] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002737] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.003596] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.007837] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002428] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.004751] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002443] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.077675] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.098047] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.909307] netlink: 'init': attribute type 4 has an invalid length.
	[Sep23 11:09] tmpfs: Unknown parameter 'noswap'
	[  +9.754246] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [2d5d22e85fb0] <==
	{"level":"warn","ts":"2024-09-23T11:25:40.004949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.661772ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032087506949215 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-33iebyujlbse5b4zwjxvjaz3ny\" mod_revision:2689 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-33iebyujlbse5b4zwjxvjaz3ny\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-33iebyujlbse5b4zwjxvjaz3ny\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-23T11:25:40.005236Z","caller":"traceutil/trace.go:171","msg":"trace[634030305] transaction","detail":"{read_only:false; response_revision:2745; number_of_response:1; }","duration":"175.021153ms","start":"2024-09-23T11:25:39.830196Z","end":"2024-09-23T11:25:40.005217Z","steps":["trace[634030305] 'process raft request'  (duration: 48.939241ms)","trace[634030305] 'compare'  (duration: 125.477353ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:25:40.283045Z","caller":"traceutil/trace.go:171","msg":"trace[1026891102] linearizableReadLoop","detail":"{readStateIndex:2970; appliedIndex:2969; }","duration":"152.826203ms","start":"2024-09-23T11:25:40.130202Z","end":"2024-09-23T11:25:40.283028Z","steps":["trace[1026891102] 'read index received'  (duration: 152.47877ms)","trace[1026891102] 'applied index is now lower than readState.Index'  (duration: 346.433µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:25:40.283288Z","caller":"traceutil/trace.go:171","msg":"trace[2003789317] transaction","detail":"{read_only:false; response_revision:2746; number_of_response:1; }","duration":"158.638066ms","start":"2024-09-23T11:25:40.124635Z","end":"2024-09-23T11:25:40.283273Z","steps":["trace[2003789317] 'process raft request'  (duration: 158.081812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:25:40.283339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.120331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-23T11:25:40.283381Z","caller":"traceutil/trace.go:171","msg":"trace[1344021968] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:2746; }","duration":"153.175737ms","start":"2024-09-23T11:25:40.130197Z","end":"2024-09-23T11:25:40.283373Z","steps":["trace[1344021968] 'agreement among raft nodes before linearized reading'  (duration: 152.996219ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:25:55.825223Z","caller":"traceutil/trace.go:171","msg":"trace[362168449] transaction","detail":"{read_only:false; response_revision:2881; number_of_response:1; }","duration":"191.792543ms","start":"2024-09-23T11:25:55.633371Z","end":"2024-09-23T11:25:55.825163Z","steps":["trace[362168449] 'process raft request'  (duration: 191.05577ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:25:55.843874Z","caller":"traceutil/trace.go:171","msg":"trace[8460361] transaction","detail":"{read_only:false; response_revision:2882; number_of_response:1; }","duration":"118.389117ms","start":"2024-09-23T11:25:55.725471Z","end":"2024-09-23T11:25:55.843860Z","steps":["trace[8460361] 'process raft request'  (duration: 111.998479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:26:04.371742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.77021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-e0f6395d-66c4-4c42-9f5c-1478cb042762\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-09-23T11:26:04.371893Z","caller":"traceutil/trace.go:171","msg":"trace[785513673] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-e0f6395d-66c4-4c42-9f5c-1478cb042762; range_end:; response_count:1; response_revision:2961; }","duration":"105.937827ms","start":"2024-09-23T11:26:04.265939Z","end":"2024-09-23T11:26:04.371876Z","steps":["trace[785513673] 'range keys from in-memory index tree'  (duration: 105.617995ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:26:09.826125Z","caller":"traceutil/trace.go:171","msg":"trace[335539101] transaction","detail":"{read_only:false; response_revision:2995; number_of_response:1; }","duration":"101.930735ms","start":"2024-09-23T11:26:09.724168Z","end":"2024-09-23T11:26:09.826099Z","steps":["trace[335539101] 'process raft request'  (duration: 100.94804ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:26:24.738949Z","caller":"traceutil/trace.go:171","msg":"trace[29906203] transaction","detail":"{read_only:false; response_revision:3161; number_of_response:1; }","duration":"100.486912ms","start":"2024-09-23T11:26:24.638438Z","end":"2024-09-23T11:26:24.738925Z","steps":["trace[29906203] 'process raft request'  (duration: 100.200184ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:26:25.021440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.226559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/kube-system/csi-hostpath-resizer-dd9fcd54\" ","response":"range_response_count:1 size:3027"}
	{"level":"info","ts":"2024-09-23T11:26:25.021610Z","caller":"traceutil/trace.go:171","msg":"trace[1040234679] range","detail":"{range_begin:/registry/controllerrevisions/kube-system/csi-hostpath-resizer-dd9fcd54; range_end:; response_count:1; response_revision:3164; }","duration":"196.323769ms","start":"2024-09-23T11:26:24.825192Z","end":"2024-09-23T11:26:25.021516Z","steps":["trace[1040234679] 'range keys from in-memory index tree'  (duration: 184.421106ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:26:25.021433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.055745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpath-resizer-0\" ","response":"range_response_count:1 size:4041"}
	{"level":"info","ts":"2024-09-23T11:26:25.021709Z","caller":"traceutil/trace.go:171","msg":"trace[896007200] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpath-resizer-0; range_end:; response_count:1; response_revision:3164; }","duration":"195.327971ms","start":"2024-09-23T11:26:24.826350Z","end":"2024-09-23T11:26:25.021678Z","steps":["trace[896007200] 'range keys from in-memory index tree'  (duration: 184.465511ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:26:25.142554Z","caller":"traceutil/trace.go:171","msg":"trace[691452082] linearizableReadLoop","detail":"{readStateIndex:3405; appliedIndex:3403; }","duration":"117.946416ms","start":"2024-09-23T11:26:25.024541Z","end":"2024-09-23T11:26:25.142487Z","steps":["trace[691452082] 'read index received'  (duration: 96.299903ms)","trace[691452082] 'applied index is now lower than readState.Index'  (duration: 21.645613ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:26:25.142724Z","caller":"traceutil/trace.go:171","msg":"trace[900859019] transaction","detail":"{read_only:false; response_revision:3167; number_of_response:1; }","duration":"119.00952ms","start":"2024-09-23T11:26:25.023682Z","end":"2024-09-23T11:26:25.142692Z","steps":["trace[900859019] 'process raft request'  (duration: 118.670887ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:26:25.142885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.326253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T11:26:25.142934Z","caller":"traceutil/trace.go:171","msg":"trace[1368157092] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3167; }","duration":"118.38606ms","start":"2024-09-23T11:26:25.024536Z","end":"2024-09-23T11:26:25.142922Z","steps":["trace[1368157092] 'agreement among raft nodes before linearized reading'  (duration: 118.306152ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:26:25.143047Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.639386ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-attacher-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T11:26:25.143535Z","caller":"traceutil/trace.go:171","msg":"trace[2121727530] range","detail":"{range_begin:/registry/clusterroles/external-attacher-runner; range_end:; response_count:0; response_revision:3167; }","duration":"118.127734ms","start":"2024-09-23T11:26:25.025390Z","end":"2024-09-23T11:26:25.143518Z","steps":["trace[2121727530] 'agreement among raft nodes before linearized reading'  (duration: 117.528975ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:26:25.143043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.660388ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-827700\" ","response":"range_response_count:1 size:7744"}
	{"level":"info","ts":"2024-09-23T11:26:25.143694Z","caller":"traceutil/trace.go:171","msg":"trace[227779118] range","detail":"{range_begin:/registry/minions/addons-827700; range_end:; response_count:1; response_revision:3167; }","duration":"118.408461ms","start":"2024-09-23T11:26:25.025268Z","end":"2024-09-23T11:26:25.143677Z","steps":["trace[227779118] 'agreement among raft nodes before linearized reading'  (duration: 117.541377ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:26:26.221115Z","caller":"traceutil/trace.go:171","msg":"trace[1555411876] transaction","detail":"{read_only:false; response_revision:3173; number_of_response:1; }","duration":"118.013923ms","start":"2024-09-23T11:26:26.103075Z","end":"2024-09-23T11:26:26.221089Z","steps":["trace[1555411876] 'process raft request'  (duration: 117.889911ms)"],"step_count":1}
	
	
	==> gcp-auth [259a329d453d] <==
	2024/09/23 11:17:14 Ready to write response ...
	2024/09/23 11:17:14 Ready to marshal response ...
	2024/09/23 11:17:14 Ready to write response ...
	2024/09/23 11:17:14 Ready to marshal response ...
	2024/09/23 11:17:14 Ready to write response ...
	2024/09/23 11:25:24 Ready to marshal response ...
	2024/09/23 11:25:24 Ready to write response ...
	2024/09/23 11:25:24 Ready to marshal response ...
	2024/09/23 11:25:24 Ready to write response ...
	2024/09/23 11:25:24 Ready to marshal response ...
	2024/09/23 11:25:24 Ready to write response ...
	2024/09/23 11:25:34 Ready to marshal response ...
	2024/09/23 11:25:34 Ready to write response ...
	2024/09/23 11:25:47 Ready to marshal response ...
	2024/09/23 11:25:47 Ready to write response ...
	2024/09/23 11:25:54 Ready to marshal response ...
	2024/09/23 11:25:54 Ready to write response ...
	2024/09/23 11:25:58 Ready to marshal response ...
	2024/09/23 11:25:58 Ready to write response ...
	2024/09/23 11:25:59 Ready to marshal response ...
	2024/09/23 11:25:59 Ready to write response ...
	2024/09/23 11:26:11 Ready to marshal response ...
	2024/09/23 11:26:11 Ready to write response ...
	2024/09/23 11:26:17 Ready to marshal response ...
	2024/09/23 11:26:17 Ready to write response ...
	
	
	==> kernel <==
	 11:26:37 up 27 min,  0 users,  load average: 2.32, 1.34, 1.12
	Linux addons-827700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [6d4f39411282] <==
	W0923 11:17:06.972072       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 11:17:07.172054       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 11:17:07.591235       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 11:17:08.310292       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0923 11:25:24.453317       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.212.84"}
	I0923 11:25:52.079557       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 11:25:53.241315       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 11:25:54.337333       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 11:25:54.813474       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.103.232"}
	I0923 11:25:58.525926       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 11:25:59.123905       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0923 11:26:32.011212       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 11:26:32.011441       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 11:26:32.042355       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 11:26:32.042501       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 11:26:32.075621       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 11:26:32.075803       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 11:26:32.141070       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 11:26:32.141278       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 11:26:32.146631       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 11:26:32.146754       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 11:26:33.141512       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 11:26:33.148202       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 11:26:33.153096       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0923 11:26:34.298222       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [8311ef1556cb] <==
	E0923 11:26:33.144498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 11:26:33.150811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 11:26:33.155584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:33.982186       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:33.982294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:34.245194       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:34.245343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:34.453105       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:34.453257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:34.539857       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:34.539985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:35.355277       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:35.355428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:35.557746       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:35.557863       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 11:26:35.849369       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0923 11:26:35.849543       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 11:26:36.041378       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0923 11:26:36.041488       1 shared_informer.go:320] Caches are synced for garbage collector
	W0923 11:26:36.615220       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:36.615329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:37.118887       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:37.119095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:37.335080       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:37.335199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [74b304643e8d] <==
	E0923 11:10:21.922574       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0923 11:10:21.948365       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0923 11:10:22.233379       1 server_linux.go:66] "Using iptables proxy"
	I0923 11:10:24.732371       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 11:10:24.732969       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:10:26.011496       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 11:10:26.011684       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:10:26.126021       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0923 11:10:26.211201       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0923 11:10:26.311165       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0923 11:10:26.311556       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:10:26.311611       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:10:26.413503       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:10:26.414456       1 config.go:199] "Starting service config controller"
	I0923 11:10:26.415305       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:10:26.415835       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:10:26.419040       1 config.go:328] "Starting node config controller"
	I0923 11:10:26.421662       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:10:26.516908       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:10:26.517333       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:10:26.522643       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [45b4a6fc2aa0] <==
	W0923 11:09:56.823891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:09:56.824053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:09:56.871956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:09:56.872085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:09:56.886276       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:09:56.886446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:09:56.911755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 11:09:56.911787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:09:56.916650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:09:56.916885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:09:57.017305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:09:57.017406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:09:57.069256       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 11:09:57.069300       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 11:09:57.165661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:09:57.165770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:09:57.187695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:09:57.187846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:09:57.198201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:09:57.198304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 11:09:57.294029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:09:57.294185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:09:57.299741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:09:57.299893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 11:09:59.839510       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:26:27 addons-827700 kubelet[2576]: I0923 11:26:27.163771    2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"59d6d7ade4d315bcc56529b6c26556d498b6521d1966452cc98fa8c2afee21b7"} err="failed to get container status \"59d6d7ade4d315bcc56529b6c26556d498b6521d1966452cc98fa8c2afee21b7\": rpc error: code = Unknown desc = Error response from daemon: No such container: 59d6d7ade4d315bcc56529b6c26556d498b6521d1966452cc98fa8c2afee21b7"
	Sep 23 11:26:27 addons-827700 kubelet[2576]: I0923 11:26:27.163952    2576 scope.go:117] "RemoveContainer" containerID="75b02257b6cc725675e79a468e24b534088d3a38aecd3e43baa198d2dd688fcb"
	Sep 23 11:26:27 addons-827700 kubelet[2576]: I0923 11:26:27.166196    2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"75b02257b6cc725675e79a468e24b534088d3a38aecd3e43baa198d2dd688fcb"} err="failed to get container status \"75b02257b6cc725675e79a468e24b534088d3a38aecd3e43baa198d2dd688fcb\": rpc error: code = Unknown desc = Error response from daemon: No such container: 75b02257b6cc725675e79a468e24b534088d3a38aecd3e43baa198d2dd688fcb"
	Sep 23 11:26:27 addons-827700 kubelet[2576]: I0923 11:26:27.166296    2576 scope.go:117] "RemoveContainer" containerID="e243c480ec14e02b0b9eeb825f1c2e8a6a843d985f8f4acbfb182da0a0b66b10"
	Sep 23 11:26:27 addons-827700 kubelet[2576]: I0923 11:26:27.208710    2576 scope.go:117] "RemoveContainer" containerID="e243c480ec14e02b0b9eeb825f1c2e8a6a843d985f8f4acbfb182da0a0b66b10"
	Sep 23 11:26:27 addons-827700 kubelet[2576]: E0923 11:26:27.222858    2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e243c480ec14e02b0b9eeb825f1c2e8a6a843d985f8f4acbfb182da0a0b66b10" containerID="e243c480ec14e02b0b9eeb825f1c2e8a6a843d985f8f4acbfb182da0a0b66b10"
	Sep 23 11:26:27 addons-827700 kubelet[2576]: I0923 11:26:27.222964    2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e243c480ec14e02b0b9eeb825f1c2e8a6a843d985f8f4acbfb182da0a0b66b10"} err="failed to get container status \"e243c480ec14e02b0b9eeb825f1c2e8a6a843d985f8f4acbfb182da0a0b66b10\": rpc error: code = Unknown desc = Error response from daemon: No such container: e243c480ec14e02b0b9eeb825f1c2e8a6a843d985f8f4acbfb182da0a0b66b10"
	Sep 23 11:26:28 addons-827700 kubelet[2576]: E0923 11:26:28.145248    2576 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="6f3c4160-beb8-4c2d-a3d3-9af0d336c720"
	Sep 23 11:26:33 addons-827700 kubelet[2576]: I0923 11:26:33.184190    2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npclm\" (UniqueName: \"kubernetes.io/projected/f48dba33-f6a3-4145-9b85-2d8be9e3f9fb-kube-api-access-npclm\") pod \"f48dba33-f6a3-4145-9b85-2d8be9e3f9fb\" (UID: \"f48dba33-f6a3-4145-9b85-2d8be9e3f9fb\") "
	Sep 23 11:26:33 addons-827700 kubelet[2576]: I0923 11:26:33.187486    2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48dba33-f6a3-4145-9b85-2d8be9e3f9fb-kube-api-access-npclm" (OuterVolumeSpecName: "kube-api-access-npclm") pod "f48dba33-f6a3-4145-9b85-2d8be9e3f9fb" (UID: "f48dba33-f6a3-4145-9b85-2d8be9e3f9fb"). InnerVolumeSpecName "kube-api-access-npclm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 11:26:33 addons-827700 kubelet[2576]: I0923 11:26:33.285392    2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx2z5\" (UniqueName: \"kubernetes.io/projected/3d8eee02-a810-4ebb-a0fa-ec8c15b7f653-kube-api-access-gx2z5\") pod \"3d8eee02-a810-4ebb-a0fa-ec8c15b7f653\" (UID: \"3d8eee02-a810-4ebb-a0fa-ec8c15b7f653\") "
	Sep 23 11:26:33 addons-827700 kubelet[2576]: I0923 11:26:33.285578    2576 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-npclm\" (UniqueName: \"kubernetes.io/projected/f48dba33-f6a3-4145-9b85-2d8be9e3f9fb-kube-api-access-npclm\") on node \"addons-827700\" DevicePath \"\""
	Sep 23 11:26:33 addons-827700 kubelet[2576]: I0923 11:26:33.288946    2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8eee02-a810-4ebb-a0fa-ec8c15b7f653-kube-api-access-gx2z5" (OuterVolumeSpecName: "kube-api-access-gx2z5") pod "3d8eee02-a810-4ebb-a0fa-ec8c15b7f653" (UID: "3d8eee02-a810-4ebb-a0fa-ec8c15b7f653"). InnerVolumeSpecName "kube-api-access-gx2z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 11:26:33 addons-827700 kubelet[2576]: I0923 11:26:33.386004    2576 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gx2z5\" (UniqueName: \"kubernetes.io/projected/3d8eee02-a810-4ebb-a0fa-ec8c15b7f653-kube-api-access-gx2z5\") on node \"addons-827700\" DevicePath \"\""
	Sep 23 11:26:34 addons-827700 kubelet[2576]: I0923 11:26:34.138957    2576 scope.go:117] "RemoveContainer" containerID="4b54b53dfe19ab5617e016a249003a8e908d061e6238a7a4daa2f9dd752c81c2"
	Sep 23 11:26:34 addons-827700 kubelet[2576]: I0923 11:26:34.192853    2576 scope.go:117] "RemoveContainer" containerID="367cfe51cf472617deb393849bd33c61f294f63a156c789b41ec3f515e55a3c6"
	Sep 23 11:26:34 addons-827700 kubelet[2576]: I0923 11:26:34.897908    2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mbh6\" (UniqueName: \"kubernetes.io/projected/b67cd95e-83df-4584-b4ad-93425d0ea3e5-kube-api-access-6mbh6\") pod \"b67cd95e-83df-4584-b4ad-93425d0ea3e5\" (UID: \"b67cd95e-83df-4584-b4ad-93425d0ea3e5\") "
	Sep 23 11:26:34 addons-827700 kubelet[2576]: I0923 11:26:34.898095    2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b67cd95e-83df-4584-b4ad-93425d0ea3e5-gcp-creds\") pod \"b67cd95e-83df-4584-b4ad-93425d0ea3e5\" (UID: \"b67cd95e-83df-4584-b4ad-93425d0ea3e5\") "
	Sep 23 11:26:34 addons-827700 kubelet[2576]: I0923 11:26:34.898306    2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b67cd95e-83df-4584-b4ad-93425d0ea3e5-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b67cd95e-83df-4584-b4ad-93425d0ea3e5" (UID: "b67cd95e-83df-4584-b4ad-93425d0ea3e5"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 11:26:34 addons-827700 kubelet[2576]: I0923 11:26:34.904103    2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67cd95e-83df-4584-b4ad-93425d0ea3e5-kube-api-access-6mbh6" (OuterVolumeSpecName: "kube-api-access-6mbh6") pod "b67cd95e-83df-4584-b4ad-93425d0ea3e5" (UID: "b67cd95e-83df-4584-b4ad-93425d0ea3e5"). InnerVolumeSpecName "kube-api-access-6mbh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 11:26:34 addons-827700 kubelet[2576]: I0923 11:26:34.999661    2576 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6mbh6\" (UniqueName: \"kubernetes.io/projected/b67cd95e-83df-4584-b4ad-93425d0ea3e5-kube-api-access-6mbh6\") on node \"addons-827700\" DevicePath \"\""
	Sep 23 11:26:34 addons-827700 kubelet[2576]: I0923 11:26:34.999774    2576 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b67cd95e-83df-4584-b4ad-93425d0ea3e5-gcp-creds\") on node \"addons-827700\" DevicePath \"\""
	Sep 23 11:26:35 addons-827700 kubelet[2576]: I0923 11:26:35.160922    2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8eee02-a810-4ebb-a0fa-ec8c15b7f653" path="/var/lib/kubelet/pods/3d8eee02-a810-4ebb-a0fa-ec8c15b7f653/volumes"
	Sep 23 11:26:35 addons-827700 kubelet[2576]: I0923 11:26:35.161728    2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f48dba33-f6a3-4145-9b85-2d8be9e3f9fb" path="/var/lib/kubelet/pods/f48dba33-f6a3-4145-9b85-2d8be9e3f9fb/volumes"
	Sep 23 11:26:37 addons-827700 kubelet[2576]: I0923 11:26:37.150681    2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b67cd95e-83df-4584-b4ad-93425d0ea3e5" path="/var/lib/kubelet/pods/b67cd95e-83df-4584-b4ad-93425d0ea3e5/volumes"
	
	
	==> storage-provisioner [409ca10fc0ac] <==
	I0923 11:10:28.741225       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:10:29.112018       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:10:29.112457       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:10:29.315213       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:10:29.315613       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-827700_52711129-7f02-40bf-aa78-60c9516470eb!
	I0923 11:10:29.316706       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7d546179-e3ed-420f-baff-27ef1edd03a4", APIVersion:"v1", ResourceVersion:"774", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-827700_52711129-7f02-40bf-aa78-60c9516470eb became leader
	I0923 11:10:29.517368       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-827700_52711129-7f02-40bf-aa78-60c9516470eb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-827700 -n addons-827700
helpers_test.go:261: (dbg) Run:  kubectl --context addons-827700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-cb9w8 ingress-nginx-admission-patch-wzdwg
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-827700 describe pod busybox ingress-nginx-admission-create-cb9w8 ingress-nginx-admission-patch-wzdwg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-827700 describe pod busybox ingress-nginx-admission-create-cb9w8 ingress-nginx-admission-patch-wzdwg: exit status 1 (267.75ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-827700/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 11:17:14 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2rbkz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2rbkz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m26s                   default-scheduler  Successfully assigned default/busybox to addons-827700
	  Warning  Failed     8m7s (x6 over 9m24s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m55s (x4 over 9m25s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m55s (x4 over 9m25s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m55s (x4 over 9m25s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m18s (x22 over 9m24s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cb9w8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wzdwg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-827700 describe pod busybox ingress-nginx-admission-create-cb9w8 ingress-nginx-admission-patch-wzdwg: exit status 1
--- FAIL: TestAddons/parallel/Registry (77.58s)

                                                
                                    
TestErrorSpam/setup (64.51s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-111700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-111700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 --driver=docker: (1m4.5122364s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-111700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=19690
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-111700" primary control-plane node in "nospam-111700" cluster
* Pulling base image v0.0.45-1726784731-19672 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-111700" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (64.51s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (5.18s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-716900
helpers_test.go:235: (dbg) docker inspect functional-716900:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99ca7d137dc3c3558ffaa72adc51267e3bb4ba7363e5a7050af3e7f35b36fcb8",
	        "Created": "2024-09-23T11:29:31.056471771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T11:29:31.389979782Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/99ca7d137dc3c3558ffaa72adc51267e3bb4ba7363e5a7050af3e7f35b36fcb8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99ca7d137dc3c3558ffaa72adc51267e3bb4ba7363e5a7050af3e7f35b36fcb8/hostname",
	        "HostsPath": "/var/lib/docker/containers/99ca7d137dc3c3558ffaa72adc51267e3bb4ba7363e5a7050af3e7f35b36fcb8/hosts",
	        "LogPath": "/var/lib/docker/containers/99ca7d137dc3c3558ffaa72adc51267e3bb4ba7363e5a7050af3e7f35b36fcb8/99ca7d137dc3c3558ffaa72adc51267e3bb4ba7363e5a7050af3e7f35b36fcb8-json.log",
	        "Name": "/functional-716900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-716900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-716900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b44163154acbb5211d0b671a149d35d7907f1e75b7af4709540aef5daf31b40a-init/diff:/var/lib/docker/overlay2/c7287d3444125b9a8090b921db98cb6ed8be2d7a048d39cf2a791cb2793d7251/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b44163154acbb5211d0b671a149d35d7907f1e75b7af4709540aef5daf31b40a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b44163154acbb5211d0b671a149d35d7907f1e75b7af4709540aef5daf31b40a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b44163154acbb5211d0b671a149d35d7907f1e75b7af4709540aef5daf31b40a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-716900",
	                "Source": "/var/lib/docker/volumes/functional-716900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-716900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-716900",
	                "name.minikube.sigs.k8s.io": "functional-716900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "027ea45f239cee1e51cf304ff5f2ccbb8abc56bc20f03aa3b96d6b1b1871d194",
	            "SandboxKey": "/var/run/docker/netns/027ea45f239c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54336"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54337"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54338"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54334"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54335"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-716900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "396e2b9e328470ad967ada0f34bc817ce88025c745e478ae061b91c50cde51db",
	                    "EndpointID": "09795715a21da6c7a5288621c8f7f3ab29d8448f43927178c40846de36825802",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-716900",
	                        "99ca7d137dc3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-716900 -n functional-716900
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 logs -n 25: (2.4202236s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-111700 --log_dir                                     | nospam-111700     | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:28 UTC | 23 Sep 24 11:28 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-111700 --log_dir                                     | nospam-111700     | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:28 UTC | 23 Sep 24 11:28 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-111700 --log_dir                                     | nospam-111700     | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:28 UTC | 23 Sep 24 11:28 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-111700 --log_dir                                     | nospam-111700     | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:28 UTC | 23 Sep 24 11:28 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-111700 --log_dir                                     | nospam-111700     | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:28 UTC | 23 Sep 24 11:28 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-111700 --log_dir                                     | nospam-111700     | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:28 UTC | 23 Sep 24 11:28 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-111700 --log_dir                                     | nospam-111700     | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:28 UTC | 23 Sep 24 11:28 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-111700                                            | nospam-111700     | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:29 UTC | 23 Sep 24 11:29 UTC |
	| start   | -p functional-716900                                        | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:29 UTC | 23 Sep 24 11:30 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |         |                     |                     |
	| start   | -p functional-716900                                        | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:30 UTC | 23 Sep 24 11:31 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-716900 cache add                                 | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-716900 cache add                                 | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-716900 cache add                                 | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-716900 cache add                                 | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | minikube-local-cache-test:functional-716900                 |                   |                   |         |                     |                     |
	| cache   | functional-716900 cache delete                              | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | minikube-local-cache-test:functional-716900                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	| ssh     | functional-716900 ssh sudo                                  | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-716900                                           | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-716900 ssh                                       | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-716900 cache reload                              | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	| ssh     | functional-716900 ssh                                       | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-716900 kubectl --                                | functional-716900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:31 UTC | 23 Sep 24 11:31 UTC |
	|         | --context functional-716900                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:30:42
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:30:42.291331    8352 out.go:345] Setting OutFile to fd 840 ...
	I0923 11:30:42.373899    8352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:30:42.373899    8352 out.go:358] Setting ErrFile to fd 828...
	I0923 11:30:42.373962    8352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:30:42.399963    8352 out.go:352] Setting JSON to false
	I0923 11:30:42.403187    8352 start.go:129] hostinfo: {"hostname":"minikube2","uptime":1910,"bootTime":1727089132,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0923 11:30:42.403187    8352 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:30:42.408601    8352 out.go:177] * [functional-716900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:30:42.413545    8352 notify.go:220] Checking for updates...
	I0923 11:30:42.416053    8352 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 11:30:42.421908    8352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:30:42.424895    8352 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0923 11:30:42.427338    8352 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:30:42.430410    8352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:30:42.434306    8352 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:30:42.434526    8352 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:30:42.621739    8352 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 11:30:42.632137    8352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:30:42.956184    8352 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:81 SystemTime:2024-09-23 11:30:42.929098995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:30:42.961197    8352 out.go:177] * Using the docker driver based on existing profile
	I0923 11:30:42.963172    8352 start.go:297] selected driver: docker
	I0923 11:30:42.963172    8352 start.go:901] validating driver "docker" against &{Name:functional-716900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-716900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:30:42.964127    8352 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:30:42.979218    8352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:30:43.315249    8352 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:81 SystemTime:2024-09-23 11:30:43.290134489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:30:43.423787    8352 cni.go:84] Creating CNI manager for ""
	I0923 11:30:43.423893    8352 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:30:43.424091    8352 start.go:340] cluster config:
	{Name:functional-716900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-716900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:30:43.427661    8352 out.go:177] * Starting "functional-716900" primary control-plane node in "functional-716900" cluster
	I0923 11:30:43.431388    8352 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 11:30:43.433796    8352 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:30:43.436637    8352 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:30:43.436637    8352 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:30:43.436637    8352 preload.go:146] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:30:43.436637    8352 cache.go:56] Caching tarball of preloaded images
	I0923 11:30:43.437830    8352 preload.go:172] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 11:30:43.438143    8352 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 11:30:43.438333    8352 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\config.json ...
	I0923 11:30:43.551587    8352 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 11:30:43.551587    8352 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 11:30:43.551587    8352 cache.go:194] Successfully downloaded all kic artifacts
	I0923 11:30:43.552126    8352 start.go:360] acquireMachinesLock for functional-716900: {Name:mka1ad956deb73d38669c255e039d330c7960f60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:30:43.552401    8352 start.go:364] duration metric: took 198.7µs to acquireMachinesLock for "functional-716900"
	I0923 11:30:43.552750    8352 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:30:43.552750    8352 fix.go:54] fixHost starting: 
	I0923 11:30:43.569705    8352 cli_runner.go:164] Run: docker container inspect functional-716900 --format={{.State.Status}}
	I0923 11:30:43.644226    8352 fix.go:112] recreateIfNeeded on functional-716900: state=Running err=<nil>
	W0923 11:30:43.644316    8352 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:30:43.648336    8352 out.go:177] * Updating the running docker "functional-716900" container ...
	I0923 11:30:43.651367    8352 machine.go:93] provisionDockerMachine start ...
	I0923 11:30:43.659273    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:43.739682    8352 main.go:141] libmachine: Using SSH client type: native
	I0923 11:30:43.740150    8352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 54336 <nil> <nil>}
	I0923 11:30:43.740150    8352 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:30:43.926209    8352 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-716900
	
	I0923 11:30:43.926209    8352 ubuntu.go:169] provisioning hostname "functional-716900"
	I0923 11:30:43.935984    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:44.016616    8352 main.go:141] libmachine: Using SSH client type: native
	I0923 11:30:44.016616    8352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 54336 <nil> <nil>}
	I0923 11:30:44.016616    8352 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-716900 && echo "functional-716900" | sudo tee /etc/hostname
	I0923 11:30:44.250744    8352 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-716900
	
	I0923 11:30:44.258224    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:44.338887    8352 main.go:141] libmachine: Using SSH client type: native
	I0923 11:30:44.339886    8352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 54336 <nil> <nil>}
	I0923 11:30:44.339886    8352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-716900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-716900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-716900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:30:44.535954    8352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:30:44.536943    8352 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I0923 11:30:44.536943    8352 ubuntu.go:177] setting up certificates
	I0923 11:30:44.536943    8352 provision.go:84] configureAuth start
	I0923 11:30:44.543953    8352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-716900
	I0923 11:30:44.616999    8352 provision.go:143] copyHostCerts
	I0923 11:30:44.616999    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem
	I0923 11:30:44.618004    8352 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I0923 11:30:44.618004    8352 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I0923 11:30:44.618004    8352 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 11:30:44.619015    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem
	I0923 11:30:44.619015    8352 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I0923 11:30:44.619015    8352 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I0923 11:30:44.620004    8352 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I0923 11:30:44.621021    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem
	I0923 11:30:44.621021    8352 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I0923 11:30:44.621021    8352 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I0923 11:30:44.622000    8352 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 11:30:44.623004    8352 provision.go:117] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-716900 san=[127.0.0.1 192.168.49.2 functional-716900 localhost minikube]
	I0923 11:30:44.811566    8352 provision.go:177] copyRemoteCerts
	I0923 11:30:44.825190    8352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:30:44.833559    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:44.906336    8352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
	I0923 11:30:45.045454    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 11:30:45.046363    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:30:45.097240    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 11:30:45.097776    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0923 11:30:45.151029    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 11:30:45.151587    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 11:30:45.199502    8352 provision.go:87] duration metric: took 662.5564ms to configureAuth
	I0923 11:30:45.199611    8352 ubuntu.go:193] setting minikube options for container-runtime
	I0923 11:30:45.200426    8352 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:30:45.208012    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:45.292295    8352 main.go:141] libmachine: Using SSH client type: native
	I0923 11:30:45.293002    8352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 54336 <nil> <nil>}
	I0923 11:30:45.293002    8352 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 11:30:45.487976    8352 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 11:30:45.487976    8352 ubuntu.go:71] root file system type: overlay
	I0923 11:30:45.487976    8352 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 11:30:45.498356    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:45.583390    8352 main.go:141] libmachine: Using SSH client type: native
	I0923 11:30:45.584759    8352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 54336 <nil> <nil>}
	I0923 11:30:45.584759    8352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 11:30:45.812637    8352 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 11:30:45.821806    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:45.903936    8352 main.go:141] libmachine: Using SSH client type: native
	I0923 11:30:45.905140    8352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 54336 <nil> <nil>}
	I0923 11:30:45.905140    8352 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 11:30:46.106681    8352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:30:46.106681    8352 machine.go:96] duration metric: took 2.4553048s to provisionDockerMachine
	I0923 11:30:46.106804    8352 start.go:293] postStartSetup for "functional-716900" (driver="docker")
	I0923 11:30:46.106882    8352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:30:46.126290    8352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:30:46.133882    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:46.215389    8352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
	I0923 11:30:46.380285    8352 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:30:46.393421    8352 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.5 LTS"
	I0923 11:30:46.393421    8352 command_runner.go:130] > NAME="Ubuntu"
	I0923 11:30:46.393421    8352 command_runner.go:130] > VERSION_ID="22.04"
	I0923 11:30:46.393421    8352 command_runner.go:130] > VERSION="22.04.5 LTS (Jammy Jellyfish)"
	I0923 11:30:46.393421    8352 command_runner.go:130] > VERSION_CODENAME=jammy
	I0923 11:30:46.393421    8352 command_runner.go:130] > ID=ubuntu
	I0923 11:30:46.393514    8352 command_runner.go:130] > ID_LIKE=debian
	I0923 11:30:46.393514    8352 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0923 11:30:46.393514    8352 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0923 11:30:46.393514    8352 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0923 11:30:46.393514    8352 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0923 11:30:46.393514    8352 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0923 11:30:46.393514    8352 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 11:30:46.393514    8352 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 11:30:46.393514    8352 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 11:30:46.393514    8352 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 11:30:46.393514    8352 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I0923 11:30:46.394232    8352 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I0923 11:30:46.395847    8352 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\132002.pem -> 132002.pem in /etc/ssl/certs
	I0923 11:30:46.395847    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\132002.pem -> /etc/ssl/certs/132002.pem
	I0923 11:30:46.396813    8352 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\13200\hosts -> hosts in /etc/test/nested/copy/13200
	I0923 11:30:46.396813    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\13200\hosts -> /etc/test/nested/copy/13200/hosts
	I0923 11:30:46.410078    8352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13200
	I0923 11:30:46.430772    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\132002.pem --> /etc/ssl/certs/132002.pem (1708 bytes)
	I0923 11:30:46.477949    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\13200\hosts --> /etc/test/nested/copy/13200/hosts (40 bytes)
	I0923 11:30:46.523531    8352 start.go:296] duration metric: took 416.6479ms for postStartSetup
	I0923 11:30:46.535679    8352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:30:46.542711    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:46.623650    8352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
	I0923 11:30:46.757567    8352 command_runner.go:130] > 1%
	I0923 11:30:46.770268    8352 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 11:30:46.784381    8352 command_runner.go:130] > 951G
	I0923 11:30:46.784381    8352 fix.go:56] duration metric: took 3.2316187s for fixHost
	I0923 11:30:46.784381    8352 start.go:83] releasing machines lock for "functional-716900", held for 3.2319156s
	I0923 11:30:46.793414    8352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-716900
	I0923 11:30:46.876504    8352 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 11:30:46.884504    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:46.886514    8352 ssh_runner.go:195] Run: cat /version.json
	I0923 11:30:46.895496    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:30:46.959511    8352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
	I0923 11:30:46.962506    8352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
	I0923 11:30:47.084046    8352 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W0923 11:30:47.086490    8352 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 11:30:47.086490    8352 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 11:30:47.101497    8352 ssh_runner.go:195] Run: systemctl --version
	I0923 11:30:47.115115    8352 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0923 11:30:47.115115    8352 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0923 11:30:47.128484    8352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:30:47.143464    8352 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0923 11:30:47.143532    8352 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0923 11:30:47.143532    8352 command_runner.go:130] > Device: 8ah/138d	Inode: 224         Links: 1
	I0923 11:30:47.143585    8352 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 11:30:47.143585    8352 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0923 11:30:47.143585    8352 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0923 11:30:47.143585    8352 command_runner.go:130] > Change: 2024-09-23 11:08:09.900280567 +0000
	I0923 11:30:47.143585    8352 command_runner.go:130] >  Birth: 2024-09-23 11:08:09.900280567 +0000
	I0923 11:30:47.158080    8352 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 11:30:47.178624    8352 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W0923 11:30:47.181240    8352 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0923 11:30:47.194005    8352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0923 11:30:47.199965    8352 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0923 11:30:47.199965    8352 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 11:30:47.217475    8352 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:30:47.217475    8352 start.go:495] detecting cgroup driver to use...
	I0923 11:30:47.217475    8352 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:30:47.218460    8352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:30:47.255243    8352 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 11:30:47.270136    8352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:30:47.315793    8352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:30:47.338732    8352 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:30:47.351999    8352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:30:47.388746    8352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:30:47.424950    8352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:30:47.461195    8352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:30:47.495657    8352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:30:47.528933    8352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:30:47.562826    8352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:30:47.601066    8352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 11:30:47.643512    8352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:30:47.664489    8352 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0923 11:30:47.674457    8352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:30:47.710911    8352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:30:47.906126    8352 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:30:58.545164    8352 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.6379424s)
	I0923 11:30:58.545164    8352 start.go:495] detecting cgroup driver to use...
	I0923 11:30:58.545164    8352 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:30:58.557697    8352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 11:30:58.592880    8352 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0923 11:30:58.592880    8352 command_runner.go:130] > [Unit]
	I0923 11:30:58.593449    8352 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 11:30:58.593449    8352 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 11:30:58.593449    8352 command_runner.go:130] > BindsTo=containerd.service
	I0923 11:30:58.593449    8352 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0923 11:30:58.593513    8352 command_runner.go:130] > Wants=network-online.target
	I0923 11:30:58.593513    8352 command_runner.go:130] > Requires=docker.socket
	I0923 11:30:58.593513    8352 command_runner.go:130] > StartLimitBurst=3
	I0923 11:30:58.593599    8352 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 11:30:58.593599    8352 command_runner.go:130] > [Service]
	I0923 11:30:58.593599    8352 command_runner.go:130] > Type=notify
	I0923 11:30:58.593823    8352 command_runner.go:130] > Restart=on-failure
	I0923 11:30:58.593823    8352 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 11:30:58.593823    8352 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 11:30:58.593823    8352 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 11:30:58.593823    8352 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 11:30:58.593823    8352 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 11:30:58.593823    8352 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 11:30:58.593823    8352 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 11:30:58.593823    8352 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 11:30:58.593823    8352 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 11:30:58.593823    8352 command_runner.go:130] > ExecStart=
	I0923 11:30:58.593823    8352 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0923 11:30:58.593823    8352 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 11:30:58.593823    8352 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 11:30:58.593823    8352 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 11:30:58.593823    8352 command_runner.go:130] > LimitNOFILE=infinity
	I0923 11:30:58.593823    8352 command_runner.go:130] > LimitNPROC=infinity
	I0923 11:30:58.593823    8352 command_runner.go:130] > LimitCORE=infinity
	I0923 11:30:58.593823    8352 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 11:30:58.593823    8352 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 11:30:58.593823    8352 command_runner.go:130] > TasksMax=infinity
	I0923 11:30:58.593823    8352 command_runner.go:130] > TimeoutStartSec=0
	I0923 11:30:58.593823    8352 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 11:30:58.593823    8352 command_runner.go:130] > Delegate=yes
	I0923 11:30:58.593823    8352 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 11:30:58.593823    8352 command_runner.go:130] > KillMode=process
	I0923 11:30:58.593823    8352 command_runner.go:130] > [Install]
	I0923 11:30:58.593823    8352 command_runner.go:130] > WantedBy=multi-user.target
	I0923 11:30:58.594363    8352 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 11:30:58.606832    8352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:30:58.635423    8352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:30:58.671020    8352 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
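The step above points crictl at the cri-dockerd socket by writing /etc/crictl.yaml through `mkdir -p` + `printf` + `tee`. A sketch of the same pattern against a scratch directory (the /tmp path is illustrative; the runtime-endpoint value is the one from the log):

```shell
# Recreate minikube's crictl.yaml write, targeting a scratch dir instead of /etc.
dir=/tmp/crictl-demo
mkdir -p "$dir"
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | tee "$dir/crictl.yaml"
```

`tee` is used (rather than plain redirection) because in the real command only the `tee` side runs under sudo, so the redirection itself needs no elevated shell.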
	I0923 11:30:58.684976    8352 ssh_runner.go:195] Run: which cri-dockerd
	I0923 11:30:58.697842    8352 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 11:30:58.714537    8352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:30:58.737695    8352 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 11:30:58.816174    8352 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 11:30:59.074796    8352 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 11:30:59.256571    8352 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:30:59.256806    8352 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:30:59.316087    8352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:30:59.509497    8352 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:31:00.410790    8352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 11:31:00.449002    8352 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0923 11:31:00.493068    8352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:31:00.530359    8352 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 11:31:00.688310    8352 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 11:31:00.854535    8352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:31:01.022958    8352 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 11:31:01.066269    8352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:31:01.104899    8352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:31:01.310126    8352 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 11:31:01.491168    8352 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 11:31:01.505443    8352 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 11:31:01.518051    8352 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 11:31:01.518051    8352 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 11:31:01.518051    8352 command_runner.go:130] > Device: 93h/147d	Inode: 720         Links: 1
	I0923 11:31:01.518051    8352 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0923 11:31:01.518051    8352 command_runner.go:130] > Access: 2024-09-23 11:31:01.447773983 +0000
	I0923 11:31:01.518051    8352 command_runner.go:130] > Modify: 2024-09-23 11:31:01.327762681 +0000
	I0923 11:31:01.518051    8352 command_runner.go:130] > Change: 2024-09-23 11:31:01.327762681 +0000
	I0923 11:31:01.518051    8352 command_runner.go:130] >  Birth: -
	I0923 11:31:01.518051    8352 start.go:563] Will wait 60s for crictl version
	I0923 11:31:01.533052    8352 ssh_runner.go:195] Run: which crictl
	I0923 11:31:01.545683    8352 command_runner.go:130] > /usr/bin/crictl
	I0923 11:31:01.557624    8352 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:31:01.711725    8352 command_runner.go:130] > Version:  0.1.0
	I0923 11:31:01.711725    8352 command_runner.go:130] > RuntimeName:  docker
	I0923 11:31:01.711725    8352 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 11:31:01.711725    8352 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 11:31:01.711725    8352 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 11:31:01.727904    8352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:31:01.905823    8352 command_runner.go:130] > 27.3.0
	I0923 11:31:01.913922    8352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:31:01.974483    8352 command_runner.go:130] > 27.3.0
	I0923 11:31:01.983869    8352 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 11:31:01.994322    8352 cli_runner.go:164] Run: docker exec -t functional-716900 dig +short host.docker.internal
	I0923 11:31:02.193621    8352 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0923 11:31:02.206239    8352 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0923 11:31:02.218592    8352 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I0923 11:31:02.225643    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-716900
	I0923 11:31:02.312826    8352 kubeadm.go:883] updating cluster {Name:functional-716900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-716900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:31:02.312961    8352 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:31:02.322756    8352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:31:02.504961    8352 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 11:31:02.504961    8352 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 11:31:02.504961    8352 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 11:31:02.504961    8352 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 11:31:02.504961    8352 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 11:31:02.504961    8352 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 11:31:02.505638    8352 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 11:31:02.505712    8352 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:31:02.505800    8352 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 11:31:02.505800    8352 docker.go:615] Images already preloaded, skipping extraction
	I0923 11:31:02.518165    8352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:31:02.798149    8352 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 11:31:02.798267    8352 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 11:31:02.798267    8352 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 11:31:02.798267    8352 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 11:31:02.798267    8352 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 11:31:02.798267    8352 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 11:31:02.798267    8352 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 11:31:02.798267    8352 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:31:02.798439    8352 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 11:31:02.798439    8352 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:31:02.798439    8352 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.1 docker true true} ...
	I0923 11:31:02.800014    8352 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-716900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-716900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:31:02.816207    8352 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 11:31:03.521139    8352 command_runner.go:130] > cgroupfs
	I0923 11:31:03.521139    8352 cni.go:84] Creating CNI manager for ""
	I0923 11:31:03.521139    8352 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:31:03.521139    8352 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:31:03.521139    8352 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-716900 NodeName:functional-716900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:31:03.521690    8352 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-716900"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 11:31:03.534467    8352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:31:03.610101    8352 command_runner.go:130] > kubeadm
	I0923 11:31:03.610101    8352 command_runner.go:130] > kubectl
	I0923 11:31:03.610101    8352 command_runner.go:130] > kubelet
	I0923 11:31:03.610101    8352 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:31:03.628456    8352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:31:03.719327    8352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0923 11:31:03.822841    8352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:31:03.908236    8352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0923 11:31:04.026997    8352 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 11:31:04.104416    8352 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0923 11:31:04.124977    8352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:31:05.120766    8352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:31:05.395572    8352 certs.go:68] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900 for IP: 192.168.49.2
	I0923 11:31:05.395678    8352 certs.go:194] generating shared ca certs ...
	I0923 11:31:05.395678    8352 certs.go:226] acquiring lock for ca certs: {Name:mka39b35711ce17aa627001b408a7adb2f266bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:31:05.396983    8352 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I0923 11:31:05.397677    8352 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I0923 11:31:05.397913    8352 certs.go:256] generating profile certs ...
	I0923 11:31:05.399137    8352 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\client.key
	I0923 11:31:05.399855    8352 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\apiserver.key.b2a7e253
	I0923 11:31:05.400390    8352 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\proxy-client.key
	I0923 11:31:05.400486    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 11:31:05.400720    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 11:31:05.401029    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 11:31:05.401266    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 11:31:05.401266    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 11:31:05.401809    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 11:31:05.402177    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 11:31:05.402386    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 11:31:05.403314    8352 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\13200.pem (1338 bytes)
	W0923 11:31:05.403314    8352 certs.go:480] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\13200_empty.pem, impossibly tiny 0 bytes
	I0923 11:31:05.403946    8352 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0923 11:31:05.403946    8352 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 11:31:05.404677    8352 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 11:31:05.405594    8352 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0923 11:31:05.405594    8352 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\132002.pem (1708 bytes)
	I0923 11:31:05.405594    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\13200.pem -> /usr/share/ca-certificates/13200.pem
	I0923 11:31:05.406918    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\132002.pem -> /usr/share/ca-certificates/132002.pem
	I0923 11:31:05.406918    8352 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:31:05.409789    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:31:05.795920    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 11:31:06.098729    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:31:06.299687    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:31:06.504371    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 11:31:06.630848    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:31:06.796085    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:31:06.906419    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-716900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 11:31:06.997284    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\13200.pem --> /usr/share/ca-certificates/13200.pem (1338 bytes)
	I0923 11:31:07.199070    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\132002.pem --> /usr/share/ca-certificates/132002.pem (1708 bytes)
	I0923 11:31:07.409726    8352 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:31:07.612568    8352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:31:07.810890    8352 ssh_runner.go:195] Run: openssl version
	I0923 11:31:07.826223    8352 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0923 11:31:07.838832    8352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:31:07.931014    8352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:31:07.995811    8352 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:31:07.995927    8352 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:31:08.011878    8352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:31:08.032322    8352 command_runner.go:130] > b5213941
	I0923 11:31:08.045788    8352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:31:08.127930    8352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13200.pem && ln -fs /usr/share/ca-certificates/13200.pem /etc/ssl/certs/13200.pem"
	I0923 11:31:08.216332    8352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13200.pem
	I0923 11:31:08.231332    8352 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 11:29 /usr/share/ca-certificates/13200.pem
	I0923 11:31:08.231332    8352 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:29 /usr/share/ca-certificates/13200.pem
	I0923 11:31:08.243334    8352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13200.pem
	I0923 11:31:08.310437    8352 command_runner.go:130] > 51391683
	I0923 11:31:08.325018    8352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13200.pem /etc/ssl/certs/51391683.0"
	I0923 11:31:08.411121    8352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132002.pem && ln -fs /usr/share/ca-certificates/132002.pem /etc/ssl/certs/132002.pem"
	I0923 11:31:08.451169    8352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132002.pem
	I0923 11:31:08.494647    8352 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 11:29 /usr/share/ca-certificates/132002.pem
	I0923 11:31:08.495151    8352 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:29 /usr/share/ca-certificates/132002.pem
	I0923 11:31:08.508953    8352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132002.pem
	I0923 11:31:08.527651    8352 command_runner.go:130] > 3ec20f2e
	I0923 11:31:08.541575    8352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132002.pem /etc/ssl/certs/3ec20f2e.0"
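The hash values printed above (`b5213941`, `51391683`, `3ec20f2e`) are OpenSSL subject-name hashes, and the `ln -fs ... /etc/ssl/certs/<hash>.0` symlinks give each CA the filename OpenSSL's certificate lookup expects. A sketch reproducing the step with a throwaway self-signed cert in /tmp (the `demoCA` subject and file names are assumptions, not from the log):

```shell
# Generate a throwaway self-signed cert, then create the <hash>.0 symlink
# the same way minikube does above for its CA certs.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.pem \
  -days 365 -subj "/CN=demoCA" 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)   # subject-name hash, e.g. b5213941
ln -fs /tmp/demo.pem "/tmp/$hash.0"                   # name OpenSSL cert lookup resolves
# Same freshness check the log runs next: fail if the cert expires within 86400s.
openssl x509 -noout -in "/tmp/$hash.0" -checkend 86400
```

The `-checkend 86400` invocation is also what produces the repeated "Certificate will not expire" lines that follow in the log.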
	I0923 11:31:08.617551    8352 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:31:08.631443    8352 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:31:08.631443    8352 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 11:31:08.631443    8352 command_runner.go:130] > Device: 830h/2096d	Inode: 17030       Links: 1
	I0923 11:31:08.631443    8352 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 11:31:08.631443    8352 command_runner.go:130] > Access: 2024-09-23 11:29:47.999195467 +0000
	I0923 11:31:08.631443    8352 command_runner.go:130] > Modify: 2024-09-23 11:29:47.999195467 +0000
	I0923 11:31:08.631443    8352 command_runner.go:130] > Change: 2024-09-23 11:29:47.999195467 +0000
	I0923 11:31:08.631443    8352 command_runner.go:130] >  Birth: 2024-09-23 11:29:47.999195467 +0000
	I0923 11:31:08.644382    8352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 11:31:08.708297    8352 command_runner.go:130] > Certificate will not expire
	I0923 11:31:08.725511    8352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 11:31:08.742214    8352 command_runner.go:130] > Certificate will not expire
	I0923 11:31:08.755310    8352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 11:31:08.811296    8352 command_runner.go:130] > Certificate will not expire
	I0923 11:31:08.825749    8352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 11:31:08.841927    8352 command_runner.go:130] > Certificate will not expire
	I0923 11:31:08.853265    8352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 11:31:08.869640    8352 command_runner.go:130] > Certificate will not expire
	I0923 11:31:08.879677    8352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 11:31:08.895470    8352 command_runner.go:130] > Certificate will not expire
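Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours (exit 0 and "Certificate will not expire" when it does not). The same decision can be sketched in Python using the stdlib parser for OpenSSL's `notAfter` timestamp format (the function name is illustrative):

```python
import ssl
import time

def expires_within(not_after, seconds, now=None):
    """Mirror `openssl x509 -checkend <seconds>`: True if the certificate's
    notAfter timestamp falls within the next <seconds> seconds."""
    expiry = ssl.cert_time_to_seconds(not_after)  # parses e.g. "Sep 23 11:29:47 2025 GMT"
    now = time.time() if now is None else now
    return expiry - now < seconds

# A certificate that expired in 2020 trips the check; one valid far in
# the future does not.
print(expires_within("Jan  1 00:00:00 2020 GMT", 86400))   # True
print(expires_within("Jan  1 00:00:00 2100 GMT", 86400))   # False
```

Checking against a 86400-second window, as minikube does here, catches certificates that are technically still valid but will lapse before the next daily rotation could run.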
	I0923 11:31:08.896895    8352 kubeadm.go:392] StartCluster: {Name:functional-716900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-716900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:31:08.905041    8352 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 11:31:09.010860    8352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:31:09.033728    8352 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0923 11:31:09.033960    8352 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0923 11:31:09.033960    8352 command_runner.go:130] > /var/lib/minikube/etcd:
	I0923 11:31:09.033960    8352 command_runner.go:130] > member
	I0923 11:31:09.034073    8352 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 11:31:09.034165    8352 kubeadm.go:593] restartPrimaryControlPlane start ...
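The `sudo ls` above probes for `kubeadm-flags.env`, `config.yaml`, and the etcd data directory; finding all of them is what lets minikube attempt a cluster restart instead of a fresh `kubeadm init`. A minimal sketch of that decision, under the simplifying assumption that presence of all three paths is the whole test (the marker list and function name are illustrative):

```python
import os

RESTART_MARKERS = (
    "/var/lib/kubelet/kubeadm-flags.env",
    "/var/lib/kubelet/config.yaml",
    "/var/lib/minikube/etcd",
)

def should_attempt_restart(markers=RESTART_MARKERS, exists=os.path.exists):
    """True only when every pre-existing configuration artifact is present."""
    return all(exists(p) for p in markers)

# With a fake filesystem probe: all present -> restart, any missing -> fresh init.
print(should_attempt_restart(exists=lambda p: True))                 # True
print(should_attempt_restart(exists=lambda p: p.endswith(".yaml")))  # False
```

Injecting the `exists` predicate keeps the sketch testable without touching the real filesystem, much as minikube shells the check out over SSH.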
	I0923 11:31:09.049642    8352 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 11:31:09.099984    8352 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 11:31:09.108602    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-716900
	I0923 11:31:09.183615    8352 kubeconfig.go:125] found "functional-716900" server: "https://127.0.0.1:54335"
	I0923 11:31:09.184577    8352 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 11:31:09.185385    8352 kapi.go:59] client config for functional-716900: &rest.Config{Host:"https://127.0.0.1:54335", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c2bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 11:31:09.186666    8352 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 11:31:09.198252    8352 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 11:31:09.219309    8352 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0923 11:31:09.219309    8352 kubeadm.go:597] duration metric: took 185.1435ms to restartPrimaryControlPlane
	I0923 11:31:09.219309    8352 kubeadm.go:394] duration metric: took 322.5389ms to StartCluster
	I0923 11:31:09.219309    8352 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:31:09.220301    8352 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 11:31:09.221311    8352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:31:09.222299    8352 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:31:09.222299    8352 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 11:31:09.223296    8352 addons.go:69] Setting storage-provisioner=true in profile "functional-716900"
	I0923 11:31:09.223296    8352 addons.go:69] Setting default-storageclass=true in profile "functional-716900"
	I0923 11:31:09.223296    8352 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:31:09.223296    8352 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-716900"
	I0923 11:31:09.223296    8352 addons.go:234] Setting addon storage-provisioner=true in "functional-716900"
	W0923 11:31:09.223296    8352 addons.go:243] addon storage-provisioner should already be in state true
	I0923 11:31:09.223296    8352 host.go:66] Checking if "functional-716900" exists ...
	I0923 11:31:09.226297    8352 out.go:177] * Verifying Kubernetes components...
	I0923 11:31:09.245061    8352 cli_runner.go:164] Run: docker container inspect functional-716900 --format={{.State.Status}}
	I0923 11:31:09.246311    8352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:31:09.247273    8352 cli_runner.go:164] Run: docker container inspect functional-716900 --format={{.State.Status}}
	I0923 11:31:09.323461    8352 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 11:31:09.324199    8352 kapi.go:59] client config for functional-716900: &rest.Config{Host:"https://127.0.0.1:54335", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c2bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 11:31:09.325788    8352 addons.go:234] Setting addon default-storageclass=true in "functional-716900"
	W0923 11:31:09.325788    8352 addons.go:243] addon default-storageclass should already be in state true
	I0923 11:31:09.325788    8352 host.go:66] Checking if "functional-716900" exists ...
	I0923 11:31:09.325788    8352 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:31:09.328776    8352 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:31:09.328776    8352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:31:09.336780    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:31:09.342782    8352 cli_runner.go:164] Run: docker container inspect functional-716900 --format={{.State.Status}}
	I0923 11:31:09.413095    8352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
	I0923 11:31:09.413095    8352 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:31:09.414067    8352 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:31:09.420048    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
	I0923 11:31:09.483093    8352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
	I0923 11:31:09.643198    8352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:31:09.646145    8352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:31:09.717947    8352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-716900
	I0923 11:31:09.792081    8352 node_ready.go:35] waiting up to 6m0s for node "functional-716900" to be "Ready" ...
	I0923 11:31:09.792364    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:09.792364    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:09.792364    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:09.792364    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:09.823836    8352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:31:10.798548    8352 round_trippers.go:574] Response Status: 200 OK in 1006 milliseconds
	I0923 11:31:10.798548    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:10.798548    8352 round_trippers.go:580]     Audit-Id: 07b23687-d017-460b-a0bd-bc27289a37f5
	I0923 11:31:10.798548    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:10.798713    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:10.798713    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0923 11:31:10.798713    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0923 11:31:10.798713    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:10 GMT
	I0923 11:31:10.799310    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:10.802022    8352 node_ready.go:49] node "functional-716900" has status "Ready":"True"
	I0923 11:31:10.802161    8352 node_ready.go:38] duration metric: took 1.0099705s for node "functional-716900" to be "Ready" ...
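The "Ready":"True" verdict above comes from inspecting the Node object's status conditions in the JSON the apiserver returned. A compact sketch of that check, assuming the standard Node schema with a `status.conditions` list (the helper name is illustrative):

```python
import json

def node_is_ready(node: dict) -> bool:
    """True if the Node has a condition of type "Ready" with status "True"."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

# Trimmed-down stand-in for the functional-716900 response body.
node_json = json.loads("""
{"kind": "Node",
 "metadata": {"name": "functional-716900"},
 "status": {"conditions": [
   {"type": "MemoryPressure", "status": "False"},
   {"type": "Ready", "status": "True"}]}}
""")
print(node_is_ready(node_json))  # True
```

Condition statuses are the strings "True", "False", or "Unknown", so the comparison is deliberately against the string "True" rather than a boolean.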
	I0923 11:31:10.802256    8352 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:31:10.802494    8352 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 11:31:10.802563    8352 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 11:31:10.802716    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods
	I0923 11:31:10.802716    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:10.802716    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:10.802716    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:10.903584    8352 round_trippers.go:574] Response Status: 200 OK in 100 milliseconds
	I0923 11:31:10.903584    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:10.903584    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:10 GMT
	I0923 11:31:10.903709    8352 round_trippers.go:580]     Audit-Id: 28a6b965-3ad0-4143-b03a-27a0d3e25283
	I0923 11:31:10.903709    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:10.903757    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:10.903757    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0923 11:31:10.903757    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0923 11:31:10.905993    8352 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bbjn9","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"183b09f8-493d-4c13-8105-8c6da1c032eb","resourceVersion":"429","creationTimestamp":"2024-09-23T11:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c4c1566c-4c92-48a3-9da4-7ee076ef7e25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4c1566c-4c92-48a3-9da4-7ee076ef7e25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51425 chars]
	I0923 11:31:10.914755    8352 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bbjn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:10.914925    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bbjn9
	I0923 11:31:10.914925    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:10.914925    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:10.914979    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:11.006789    8352 round_trippers.go:574] Response Status: 200 OK in 91 milliseconds
	I0923 11:31:11.006906    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:11.006906    8352 round_trippers.go:580]     Audit-Id: 0c8ff56f-06d0-45c1-862a-72e95d59b9d5
	I0923 11:31:11.006906    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:11.006906    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:11.006906    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:11.007038    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:11.007038    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:11 GMT
	I0923 11:31:11.007289    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-bbjn9","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"183b09f8-493d-4c13-8105-8c6da1c032eb","resourceVersion":"429","creationTimestamp":"2024-09-23T11:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c4c1566c-4c92-48a3-9da4-7ee076ef7e25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4c1566c-4c92-48a3-9da4-7ee076ef7e25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6495 chars]
	I0923 11:31:11.008264    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:11.008264    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:11.008264    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:11.008264    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:11.096344    8352 round_trippers.go:574] Response Status: 200 OK in 88 milliseconds
	I0923 11:31:11.096344    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:11.097292    8352 round_trippers.go:580]     Audit-Id: 62067100-1876-4d21-bad4-71af34c7cdc0
	I0923 11:31:11.097292    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:11.097292    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:11.097292    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:11.097292    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:11.097359    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:11 GMT
	I0923 11:31:11.098331    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:11.099122    8352 pod_ready.go:93] pod "coredns-7c65d6cfc9-bbjn9" in "kube-system" namespace has status "Ready":"True"
	I0923 11:31:11.099122    8352 pod_ready.go:82] duration metric: took 184.3662ms for pod "coredns-7c65d6cfc9-bbjn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:11.099122    8352 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-716900" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:11.099481    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/etcd-functional-716900
	I0923 11:31:11.099481    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:11.099481    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:11.099481    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:11.197177    8352 round_trippers.go:574] Response Status: 200 OK in 97 milliseconds
	I0923 11:31:11.197177    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:11.197286    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:11 GMT
	I0923 11:31:11.197286    8352 round_trippers.go:580]     Audit-Id: 791b1076-2328-4b61-ba6e-a76cf5a88fed
	I0923 11:31:11.197286    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:11.197286    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:11.197286    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:11.197286    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:11.197608    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-716900","namespace":"kube-system","uid":"8ed18d6e-b28b-44b2-bf21-0b1d273ebcb3","resourceVersion":"296","creationTimestamp":"2024-09-23T11:30:01Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"3e0a6d91e619435c748061b92b56b32c","kubernetes.io/config.mirror":"3e0a6d91e619435c748061b92b56b32c","kubernetes.io/config.seen":"2024-09-23T11:30:00.739282955Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6459 chars]
	I0923 11:31:11.198726    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:11.198726    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:11.198831    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:11.198831    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:11.214225    8352 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0923 11:31:11.214225    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:11.214225    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:11 GMT
	I0923 11:31:11.214225    8352 round_trippers.go:580]     Audit-Id: 3e94f1ac-2c8e-4985-a1b6-5eaae8446117
	I0923 11:31:11.214225    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:11.214225    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:11.214225    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:11.214225    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:11.214225    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:11.215003    8352 pod_ready.go:93] pod "etcd-functional-716900" in "kube-system" namespace has status "Ready":"True"
	I0923 11:31:11.215003    8352 pod_ready.go:82] duration metric: took 115.651ms for pod "etcd-functional-716900" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:11.215003    8352 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-716900" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:11.215321    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-716900
	I0923 11:31:11.215359    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:11.215359    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:11.215359    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:11.301347    8352 round_trippers.go:574] Response Status: 200 OK in 85 milliseconds
	I0923 11:31:11.301347    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:11.301347    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:11.301347    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:11 GMT
	I0923 11:31:11.301347    8352 round_trippers.go:580]     Audit-Id: 6223eb91-e766-46db-9a29-88662e6b3801
	I0923 11:31:11.301347    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:11.301347    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:11.301590    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:11.302280    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-716900","namespace":"kube-system","uid":"aa522876-b63f-4611-8e37-874a595b0db2","resourceVersion":"399","creationTimestamp":"2024-09-23T11:30:01Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"77419c0fead389d47491ffc9360a1dde","kubernetes.io/config.mirror":"77419c0fead389d47491ffc9360a1dde","kubernetes.io/config.seen":"2024-09-23T11:30:00.739315658Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8535 chars]
	I0923 11:31:11.303500    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:11.303602    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:11.303645    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:11.303645    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:11.311367    8352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 11:31:11.311367    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:11.311905    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:11.312010    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:11.312072    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:11.312072    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:11.312072    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:11 GMT
	I0923 11:31:11.312129    8352 round_trippers.go:580]     Audit-Id: a97c53ea-e65d-43e9-a10c-867f329ad2a9
	I0923 11:31:11.312355    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:11.313004    8352 pod_ready.go:93] pod "kube-apiserver-functional-716900" in "kube-system" namespace has status "Ready":"True"
	I0923 11:31:11.313004    8352 pod_ready.go:82] duration metric: took 98.0007ms for pod "kube-apiserver-functional-716900" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:11.313078    8352 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-716900" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:11.313147    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-716900
	I0923 11:31:11.313147    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:11.313147    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:11.313147    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:11.425865    8352 round_trippers.go:574] Response Status: 200 OK in 112 milliseconds
	I0923 11:31:11.425941    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:11.425941    8352 round_trippers.go:580]     Audit-Id: b43b2d0e-9a4e-4d5f-b095-d03cb50fa0a9
	I0923 11:31:11.426014    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:11.426044    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:11.426044    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:11.426044    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:11.426044    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:11 GMT
	I0923 11:31:11.426577    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-716900","namespace":"kube-system","uid":"a9e60b32-4a30-4f8a-a070-fa4eecb6bcb8","resourceVersion":"388","creationTimestamp":"2024-09-23T11:30:01Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e11769b158aee9d331c36a8821a5dbe","kubernetes.io/config.mirror":"0e11769b158aee9d331c36a8821a5dbe","kubernetes.io/config.seen":"2024-09-23T11:30:00.739317558Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8110 chars]
	I0923 11:31:11.427816    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:11.427816    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:11.427816    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:11.427816    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:11.434821    8352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:31:11.434821    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:11.434821    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:11.434821    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:11.434821    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:11 GMT
	I0923 11:31:11.434821    8352 round_trippers.go:580]     Audit-Id: c83d3aee-1d47-4e07-86ed-a3ffa2a6e1fd
	I0923 11:31:11.434821    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:11.434821    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:11.435355    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:11.435875    8352 pod_ready.go:93] pod "kube-controller-manager-functional-716900" in "kube-system" namespace has status "Ready":"True"
	I0923 11:31:11.435875    8352 pod_ready.go:82] duration metric: took 122.7964ms for pod "kube-controller-manager-functional-716900" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:11.435875    8352 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bbml6" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:11.435875    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-proxy-bbml6
	I0923 11:31:11.435875    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:11.435875    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:11.435875    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:11.506005    8352 round_trippers.go:574] Response Status: 200 OK in 70 milliseconds
	I0923 11:31:11.506379    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:11.506379    8352 round_trippers.go:580]     Audit-Id: e877d18f-108d-4b5a-a0b4-89b774c61f84
	I0923 11:31:11.506379    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:11.506379    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:11.506379    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:11.506379    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:11.506379    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:11 GMT
	I0923 11:31:11.506795    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bbml6","generateName":"kube-proxy-","namespace":"kube-system","uid":"509ccef8-076a-4237-9930-f6a9219c6c05","resourceVersion":"442","creationTimestamp":"2024-09-23T11:30:05Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7c3d94-2de9-47f7-866c-840b11b995ed","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7c3d94-2de9-47f7-866c-840b11b995ed\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6587 chars]
	I0923 11:31:11.508175    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:11.508175    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:11.508238    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:11.508238    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:11.516572    8352 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 11:31:11.516572    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:11.516572    8352 round_trippers.go:580]     Audit-Id: 912f24e9-efae-455c-8a2f-b48719cacd7c
	I0923 11:31:11.516572    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:11.516572    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:11.516572    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:11.516572    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:11.516572    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:11 GMT
	I0923 11:31:11.517314    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:11.936438    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-proxy-bbml6
	I0923 11:31:11.936438    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:11.936438    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:11.936438    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:12.008558    8352 round_trippers.go:574] Response Status: 200 OK in 72 milliseconds
	I0923 11:31:12.008558    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:12.008558    8352 round_trippers.go:580]     Audit-Id: 03251cc0-ccb2-46bf-8b87-58d7adb32052
	I0923 11:31:12.008558    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:12.008558    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:12.008558    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:12.008558    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:12.008558    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:12 GMT
	I0923 11:31:12.009573    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bbml6","generateName":"kube-proxy-","namespace":"kube-system","uid":"509ccef8-076a-4237-9930-f6a9219c6c05","resourceVersion":"442","creationTimestamp":"2024-09-23T11:30:05Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7c3d94-2de9-47f7-866c-840b11b995ed","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7c3d94-2de9-47f7-866c-840b11b995ed\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6587 chars]
	I0923 11:31:12.010285    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:12.010285    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:12.010285    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:12.010285    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:12.115010    8352 round_trippers.go:574] Response Status: 200 OK in 104 milliseconds
	I0923 11:31:12.116275    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:12.116275    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:12.116275    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:12.116347    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:12.116347    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:12.116347    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:12 GMT
	I0923 11:31:12.116411    8352 round_trippers.go:580]     Audit-Id: ef93bf73-a5b7-40c3-8164-28a7f59eeb35
	I0923 11:31:12.116672    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:12.437024    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-proxy-bbml6
	I0923 11:31:12.437483    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:12.437483    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:12.437483    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:12.497338    8352 round_trippers.go:574] Response Status: 200 OK in 59 milliseconds
	I0923 11:31:12.497497    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:12.497497    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:12.497497    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:12.497597    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:12.497597    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:12 GMT
	I0923 11:31:12.497597    8352 round_trippers.go:580]     Audit-Id: d1810faa-d0fa-4ac0-8f68-303d99718c0f
	I0923 11:31:12.497643    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:12.498411    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bbml6","generateName":"kube-proxy-","namespace":"kube-system","uid":"509ccef8-076a-4237-9930-f6a9219c6c05","resourceVersion":"478","creationTimestamp":"2024-09-23T11:30:05Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7c3d94-2de9-47f7-866c-840b11b995ed","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7c3d94-2de9-47f7-866c-840b11b995ed\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
	I0923 11:31:12.500231    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:12.500231    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:12.500231    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:12.500231    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:12.537670    8352 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0923 11:31:12.537670    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:12.537670    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:12.537670    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:12 GMT
	I0923 11:31:12.537670    8352 round_trippers.go:580]     Audit-Id: d14a98d2-1ab4-4f77-abf0-5237debc0c23
	I0923 11:31:12.537670    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:12.537670    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:12.537670    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:12.537670    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:12.538465    8352 pod_ready.go:93] pod "kube-proxy-bbml6" in "kube-system" namespace has status "Ready":"True"
	I0923 11:31:12.538465    8352 pod_ready.go:82] duration metric: took 1.1025851s for pod "kube-proxy-bbml6" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:12.538465    8352 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-716900" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:12.538465    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-716900
	I0923 11:31:12.538465    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:12.538465    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:12.538465    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:12.599580    8352 round_trippers.go:574] Response Status: 200 OK in 61 milliseconds
	I0923 11:31:12.599663    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:12.599663    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:12.599722    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:12 GMT
	I0923 11:31:12.599722    8352 round_trippers.go:580]     Audit-Id: e38ccc2e-2a79-40a5-a975-a51cd4256612
	I0923 11:31:12.599767    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:12.599767    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:12.599767    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:12.599767    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-716900","namespace":"kube-system","uid":"f6a6329b-d8c4-402b-af2d-314b630733aa","resourceVersion":"475","creationTimestamp":"2024-09-23T11:30:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.mirror":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.seen":"2024-09-23T11:30:00.739319158Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 11:31:12.600804    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:12.600919    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:12.600919    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:12.600919    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:12.607384    8352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:31:12.607497    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:12.607497    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:12.607497    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:12 GMT
	I0923 11:31:12.607497    8352 round_trippers.go:580]     Audit-Id: 8a4c20c3-5c5f-4ef0-8e67-fb650906631a
	I0923 11:31:12.607497    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:12.607497    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:12.607624    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:12.607692    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:13.038928    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-716900
	I0923 11:31:13.038928    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:13.038928    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:13.038928    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:13.046755    8352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 11:31:13.046755    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:13.046755    8352 round_trippers.go:580]     Audit-Id: cb3f4dc5-c8a2-4d7e-9de7-c267695caac4
	I0923 11:31:13.046755    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:13.046755    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:13.046755    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:13.046755    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:13.046755    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:13 GMT
	I0923 11:31:13.046755    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-716900","namespace":"kube-system","uid":"f6a6329b-d8c4-402b-af2d-314b630733aa","resourceVersion":"475","creationTimestamp":"2024-09-23T11:30:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.mirror":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.seen":"2024-09-23T11:30:00.739319158Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 11:31:13.047814    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:13.047865    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:13.047891    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:13.047891    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:13.056164    8352 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 11:31:13.056164    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:13.056164    8352 round_trippers.go:580]     Audit-Id: f9332adc-7a02-4455-ba05-8997f2c28263
	I0923 11:31:13.056164    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:13.056164    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:13.056164    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:13.056164    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:13.056164    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:13 GMT
	I0923 11:31:13.056164    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:13.331440    8352 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0923 11:31:13.331440    8352 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0923 11:31:13.331440    8352 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0923 11:31:13.331440    8352 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0923 11:31:13.331440    8352 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0923 11:31:13.331440    8352 command_runner.go:130] > pod/storage-provisioner configured
	I0923 11:31:13.331440    8352 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0923 11:31:13.331440    8352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.6852802s)
	I0923 11:31:13.332124    8352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.5075904s)
	I0923 11:31:13.332270    8352 round_trippers.go:463] GET https://127.0.0.1:54335/apis/storage.k8s.io/v1/storageclasses
	I0923 11:31:13.332270    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:13.332270    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:13.332270    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:13.339116    8352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:31:13.339116    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:13.339116    8352 round_trippers.go:580]     Content-Length: 1273
	I0923 11:31:13.339116    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:13 GMT
	I0923 11:31:13.339256    8352 round_trippers.go:580]     Audit-Id: 0e47d9bc-f944-4e70-a175-139eaa853b20
	I0923 11:31:13.339283    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:13.339283    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:13.339283    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:13.339283    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:13.339348    8352 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"516"},"items":[{"metadata":{"name":"standard","uid":"e6eac232-3604-45e6-a2aa-f252555b4e26","resourceVersion":"342","creationTimestamp":"2024-09-23T11:30:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T11:30:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0923 11:31:13.340164    8352 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"e6eac232-3604-45e6-a2aa-f252555b4e26","resourceVersion":"342","creationTimestamp":"2024-09-23T11:30:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T11:30:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0923 11:31:13.340264    8352 round_trippers.go:463] PUT https://127.0.0.1:54335/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 11:31:13.340264    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:13.340264    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:13.340264    8352 round_trippers.go:473]     Content-Type: application/json
	I0923 11:31:13.340359    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:13.347729    8352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 11:31:13.347770    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:13.347825    8352 round_trippers.go:580]     Audit-Id: 92f6b3a0-d5e3-44aa-acf1-1056c379bfee
	I0923 11:31:13.347867    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:13.347902    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:13.347902    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:13.347902    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:13.347902    8352 round_trippers.go:580]     Content-Length: 1220
	I0923 11:31:13.347902    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:13 GMT
	I0923 11:31:13.347951    8352 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"e6eac232-3604-45e6-a2aa-f252555b4e26","resourceVersion":"342","creationTimestamp":"2024-09-23T11:30:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T11:30:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0923 11:31:13.352424    8352 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 11:31:13.355079    8352 addons.go:510] duration metric: took 4.1327638s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0923 11:31:13.538834    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-716900
	I0923 11:31:13.538834    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:13.538834    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:13.538834    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:13.544396    8352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 11:31:13.544396    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:13.544493    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:13 GMT
	I0923 11:31:13.544493    8352 round_trippers.go:580]     Audit-Id: b7132e31-6f44-4df9-bbcb-505d9f49a560
	I0923 11:31:13.544493    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:13.544493    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:13.544493    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:13.544493    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:13.544736    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-716900","namespace":"kube-system","uid":"f6a6329b-d8c4-402b-af2d-314b630733aa","resourceVersion":"475","creationTimestamp":"2024-09-23T11:30:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.mirror":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.seen":"2024-09-23T11:30:00.739319158Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 11:31:13.545314    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:13.545314    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:13.545314    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:13.545314    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:13.551306    8352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 11:31:13.551306    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:13.551306    8352 round_trippers.go:580]     Audit-Id: 6f72f1a6-8095-408c-a4e3-f96e69575431
	I0923 11:31:13.551306    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:13.551306    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:13.551306    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:13.551306    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:13.551306    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:13 GMT
	I0923 11:31:13.551866    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:14.039507    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-716900
	I0923 11:31:14.039507    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:14.039507    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:14.039507    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:14.044758    8352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 11:31:14.044758    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:14.044758    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:14 GMT
	I0923 11:31:14.044758    8352 round_trippers.go:580]     Audit-Id: 6bef710e-b5bc-4b31-a891-8d3b06dbcb32
	I0923 11:31:14.044758    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:14.044758    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:14.044758    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:14.044758    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:14.044758    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-716900","namespace":"kube-system","uid":"f6a6329b-d8c4-402b-af2d-314b630733aa","resourceVersion":"475","creationTimestamp":"2024-09-23T11:30:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.mirror":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.seen":"2024-09-23T11:30:00.739319158Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 11:31:14.045782    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:14.045782    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:14.045782    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:14.045782    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:14.052524    8352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:31:14.052524    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:14.052524    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:14 GMT
	I0923 11:31:14.052524    8352 round_trippers.go:580]     Audit-Id: 7301fd98-8f3c-4982-9d96-314d6e7441a1
	I0923 11:31:14.052524    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:14.052524    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:14.052524    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:14.052524    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:14.052524    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:14.539240    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-716900
	I0923 11:31:14.539240    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:14.539240    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:14.539240    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:14.545001    8352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 11:31:14.545528    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:14.545528    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:14.545528    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:14.545628    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:14.545628    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:14.545653    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:14 GMT
	I0923 11:31:14.545653    8352 round_trippers.go:580]     Audit-Id: 43d6ff42-2b62-42f2-a27b-2746f2290363
	I0923 11:31:14.545973    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-716900","namespace":"kube-system","uid":"f6a6329b-d8c4-402b-af2d-314b630733aa","resourceVersion":"475","creationTimestamp":"2024-09-23T11:30:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.mirror":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.seen":"2024-09-23T11:30:00.739319158Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 11:31:14.546261    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:14.546261    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:14.546261    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:14.546261    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:14.555202    8352 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 11:31:14.555202    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:14.555202    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:14.555202    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:14 GMT
	I0923 11:31:14.555321    8352 round_trippers.go:580]     Audit-Id: 6134e534-a3d2-41d3-b8e6-f22298d574bc
	I0923 11:31:14.555321    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:14.555321    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:14.555321    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:14.556997    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:14.556997    8352 pod_ready.go:103] pod "kube-scheduler-functional-716900" in "kube-system" namespace has status "Ready":"False"
	I0923 11:31:15.038475    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-716900
	I0923 11:31:15.038475    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:15.038475    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:15.038475    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:15.045276    8352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:31:15.045276    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:15.045276    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:15 GMT
	I0923 11:31:15.045276    8352 round_trippers.go:580]     Audit-Id: c30c30d9-b46e-4390-8ee7-ea1f38b5dec7
	I0923 11:31:15.045276    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:15.045358    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:15.045358    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:15.045358    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:15.046297    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-716900","namespace":"kube-system","uid":"f6a6329b-d8c4-402b-af2d-314b630733aa","resourceVersion":"523","creationTimestamp":"2024-09-23T11:30:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.mirror":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.seen":"2024-09-23T11:30:00.739319158Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5440 chars]
	I0923 11:31:15.046876    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:15.046876    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:15.046876    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:15.046876    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:15.053873    8352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:31:15.053873    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:15.053873    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:15.053873    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:15.053873    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:15.053873    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:15.053873    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:15 GMT
	I0923 11:31:15.053873    8352 round_trippers.go:580]     Audit-Id: 349f129d-9291-489b-89de-45e7d26a2991
	I0923 11:31:15.054573    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:15.538498    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-716900
	I0923 11:31:15.538498    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:15.538498    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:15.538498    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:15.544839    8352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:31:15.544933    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:15.544933    8352 round_trippers.go:580]     Audit-Id: 5d20cbd3-0237-42b2-8950-0d7bc9fe0a32
	I0923 11:31:15.544933    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:15.544933    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:15.544933    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:15.544933    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:15.544933    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:15 GMT
	I0923 11:31:15.545202    8352 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-716900","namespace":"kube-system","uid":"f6a6329b-d8c4-402b-af2d-314b630733aa","resourceVersion":"529","creationTimestamp":"2024-09-23T11:30:01Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.mirror":"70bcdf2a7b9a7c02f733e0d63ef8fea8","kubernetes.io/config.seen":"2024-09-23T11:30:00.739319158Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0923 11:31:15.545202    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes/functional-716900
	I0923 11:31:15.545202    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:15.545202    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:15.545202    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:15.552634    8352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 11:31:15.552634    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:15.552634    8352 round_trippers.go:580]     Audit-Id: e1f90bcc-da48-4a9e-9150-3f517009fc09
	I0923 11:31:15.552634    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:15.552634    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:15.552634    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:15.552634    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:15.552634    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:15 GMT
	I0923 11:31:15.552634    8352 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:29:57Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 11:31:15.553233    8352 pod_ready.go:93] pod "kube-scheduler-functional-716900" in "kube-system" namespace has status "Ready":"True"
	I0923 11:31:15.553233    8352 pod_ready.go:82] duration metric: took 3.0147563s for pod "kube-scheduler-functional-716900" in "kube-system" namespace to be "Ready" ...
	I0923 11:31:15.553233    8352 pod_ready.go:39] duration metric: took 4.7509584s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:31:15.553233    8352 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:31:15.567017    8352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:31:15.594928    8352 command_runner.go:130] > 5682
	I0923 11:31:15.594928    8352 api_server.go:72] duration metric: took 6.3726049s to wait for apiserver process to appear ...
	I0923 11:31:15.594928    8352 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:31:15.594928    8352 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54335/healthz ...
	I0923 11:31:15.610899    8352 api_server.go:279] https://127.0.0.1:54335/healthz returned 200:
	ok
	I0923 11:31:15.611055    8352 round_trippers.go:463] GET https://127.0.0.1:54335/version
	I0923 11:31:15.611098    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:15.611131    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:15.611131    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:15.614731    8352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:31:15.614765    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:15.614814    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:15.614814    8352 round_trippers.go:580]     Content-Length: 263
	I0923 11:31:15.614814    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:15 GMT
	I0923 11:31:15.614814    8352 round_trippers.go:580]     Audit-Id: f13d887e-c827-4c51-998d-e150de970e75
	I0923 11:31:15.614846    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:15.614846    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:15.614846    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:15.614846    8352 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 11:31:15.615043    8352 api_server.go:141] control plane version: v1.31.1
	I0923 11:31:15.615080    8352 api_server.go:131] duration metric: took 20.1513ms to wait for apiserver health ...
	I0923 11:31:15.615122    8352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:31:15.615301    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods
	I0923 11:31:15.615301    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:15.615354    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:15.615354    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:15.622122    8352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:31:15.622122    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:15.622122    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:15.622122    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:15.622122    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:15 GMT
	I0923 11:31:15.622122    8352 round_trippers.go:580]     Audit-Id: 8e33ddad-f14e-45f7-b39c-103315b246ab
	I0923 11:31:15.622122    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:15.622122    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:15.623615    8352 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bbjn9","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"183b09f8-493d-4c13-8105-8c6da1c032eb","resourceVersion":"464","creationTimestamp":"2024-09-23T11:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c4c1566c-4c92-48a3-9da4-7ee076ef7e25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4c1566c-4c92-48a3-9da4-7ee076ef7e25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54018 chars]
	I0923 11:31:15.626289    8352 system_pods.go:59] 7 kube-system pods found
	I0923 11:31:15.626313    8352 system_pods.go:61] "coredns-7c65d6cfc9-bbjn9" [183b09f8-493d-4c13-8105-8c6da1c032eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 11:31:15.626313    8352 system_pods.go:61] "etcd-functional-716900" [8ed18d6e-b28b-44b2-bf21-0b1d273ebcb3] Running
	I0923 11:31:15.626313    8352 system_pods.go:61] "kube-apiserver-functional-716900" [aa522876-b63f-4611-8e37-874a595b0db2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 11:31:15.626313    8352 system_pods.go:61] "kube-controller-manager-functional-716900" [a9e60b32-4a30-4f8a-a070-fa4eecb6bcb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 11:31:15.626313    8352 system_pods.go:61] "kube-proxy-bbml6" [509ccef8-076a-4237-9930-f6a9219c6c05] Running
	I0923 11:31:15.626313    8352 system_pods.go:61] "kube-scheduler-functional-716900" [f6a6329b-d8c4-402b-af2d-314b630733aa] Running
	I0923 11:31:15.626313    8352 system_pods.go:61] "storage-provisioner" [acd58970-823c-4772-9860-2c7fa16de877] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 11:31:15.626313    8352 system_pods.go:74] duration metric: took 11.1364ms to wait for pod list to return data ...
	I0923 11:31:15.626313    8352 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:31:15.626313    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/default/serviceaccounts
	I0923 11:31:15.626313    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:15.626313    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:15.626313    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:15.633495    8352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 11:31:15.633495    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:15.633495    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:15.633495    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:15.633495    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:15.633495    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:15.633495    8352 round_trippers.go:580]     Content-Length: 261
	I0923 11:31:15.633495    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:15 GMT
	I0923 11:31:15.633495    8352 round_trippers.go:580]     Audit-Id: 0fce5426-2f65-496d-9610-dd10c097b91a
	I0923 11:31:15.633495    8352 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6be9dde5-a1df-4c43-bb8a-524c84633379","resourceVersion":"313","creationTimestamp":"2024-09-23T11:30:05Z"}}]}
	I0923 11:31:15.633495    8352 default_sa.go:45] found service account: "default"
	I0923 11:31:15.633495    8352 default_sa.go:55] duration metric: took 7.1819ms for default service account to be created ...
	I0923 11:31:15.633495    8352 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:31:15.633495    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/namespaces/kube-system/pods
	I0923 11:31:15.633495    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:15.633495    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:15.633495    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:15.639101    8352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 11:31:15.639172    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:15.639172    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:15.639172    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:15.639172    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:15.639172    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:15.639172    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:15 GMT
	I0923 11:31:15.639172    8352 round_trippers.go:580]     Audit-Id: 48c3a95d-224f-4517-ade0-18a67fbd8e71
	I0923 11:31:15.642295    8352 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bbjn9","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"183b09f8-493d-4c13-8105-8c6da1c032eb","resourceVersion":"464","creationTimestamp":"2024-09-23T11:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c4c1566c-4c92-48a3-9da4-7ee076ef7e25","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4c1566c-4c92-48a3-9da4-7ee076ef7e25\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54018 chars]
	I0923 11:31:15.644890    8352 system_pods.go:86] 7 kube-system pods found
	I0923 11:31:15.644890    8352 system_pods.go:89] "coredns-7c65d6cfc9-bbjn9" [183b09f8-493d-4c13-8105-8c6da1c032eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 11:31:15.644890    8352 system_pods.go:89] "etcd-functional-716900" [8ed18d6e-b28b-44b2-bf21-0b1d273ebcb3] Running
	I0923 11:31:15.644890    8352 system_pods.go:89] "kube-apiserver-functional-716900" [aa522876-b63f-4611-8e37-874a595b0db2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 11:31:15.644890    8352 system_pods.go:89] "kube-controller-manager-functional-716900" [a9e60b32-4a30-4f8a-a070-fa4eecb6bcb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 11:31:15.644890    8352 system_pods.go:89] "kube-proxy-bbml6" [509ccef8-076a-4237-9930-f6a9219c6c05] Running
	I0923 11:31:15.644890    8352 system_pods.go:89] "kube-scheduler-functional-716900" [f6a6329b-d8c4-402b-af2d-314b630733aa] Running
	I0923 11:31:15.644890    8352 system_pods.go:89] "storage-provisioner" [acd58970-823c-4772-9860-2c7fa16de877] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 11:31:15.644890    8352 system_pods.go:126] duration metric: took 11.3951ms to wait for k8s-apps to be running ...
	I0923 11:31:15.644890    8352 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:31:15.657994    8352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:31:15.684366    8352 system_svc.go:56] duration metric: took 39.4764ms WaitForService to wait for kubelet
	I0923 11:31:15.684366    8352 kubeadm.go:582] duration metric: took 6.4620426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:31:15.684366    8352 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:31:15.684909    8352 round_trippers.go:463] GET https://127.0.0.1:54335/api/v1/nodes
	I0923 11:31:15.685011    8352 round_trippers.go:469] Request Headers:
	I0923 11:31:15.685055    8352 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:31:15.685086    8352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:31:15.693203    8352 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 11:31:15.693203    8352 round_trippers.go:577] Response Headers:
	I0923 11:31:15.693203    8352 round_trippers.go:580]     Audit-Id: 2dc1526c-ab66-4896-9840-a52c9fb65fdf
	I0923 11:31:15.693203    8352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:31:15.693203    8352 round_trippers.go:580]     Content-Type: application/json
	I0923 11:31:15.693203    8352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf9dba29-4117-44be-81a3-ae291af0f140
	I0923 11:31:15.693203    8352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d587a30b-5d18-4732-96a6-9706aa502dd9
	I0923 11:31:15.693203    8352 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:31:15 GMT
	I0923 11:31:15.693203    8352 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"functional-716900","uid":"862a754f-9153-4439-b8ff-ecdb73dcc6fb","resourceVersion":"397","creationTimestamp":"2024-09-23T11:29:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-716900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-716900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_30_01_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4907 chars]
	I0923 11:31:15.694483    8352 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0923 11:31:15.694519    8352 node_conditions.go:123] node cpu capacity is 16
	I0923 11:31:15.694583    8352 node_conditions.go:105] duration metric: took 10.1524ms to run NodePressure ...
	I0923 11:31:15.694583    8352 start.go:241] waiting for startup goroutines ...
	I0923 11:31:15.694583    8352 start.go:246] waiting for cluster config update ...
	I0923 11:31:15.694626    8352 start.go:255] writing updated cluster config ...
	I0923 11:31:15.707541    8352 ssh_runner.go:195] Run: rm -f paused
	I0923 11:31:15.855802    8352 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:31:15.860393    8352 out.go:177] * Done! kubectl is now configured to use "functional-716900" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 11:31:01 functional-716900 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 23 11:31:01 functional-716900 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Sep 23 11:31:01 functional-716900 systemd[1]: cri-docker.service: Deactivated successfully.
	Sep 23 11:31:01 functional-716900 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Sep 23 11:31:01 functional-716900 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Start docker client with request timeout 0s"
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Loaded network plugin cni"
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Setting cgroupDriver cgroupfs"
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 23 11:31:01 functional-716900 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 23 11:31:01 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:01Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-bbjn9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"db158363c024c36d0c9b0b9054bb0a514e3f3b35e84330a89d926f7b1c09a9d8\""
	Sep 23 11:31:03 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b35ac3604ec8940bba93614fea9282c0a566484fbdf6fecaf8bfbadda35e7ff3/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 11:31:03 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65d4281fcb01739922de9d890956c6bc592c45e1535092e29bca7d1fc2ac7a1b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 11:31:03 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d633f5d129cd1424d8b43c55ba0775628838e9e9e2b9fdb0ec5662d8e32e82cf/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 11:31:03 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8e0c56f858071c4131d0311385c9baa36e5f3ddbad39214da5abf133bada1262/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 11:31:03 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b60effb74a1dec3694adc720e1d52b6f4b87507424e7a4c858c9f6c51812b2c3/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 11:31:03 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1395781659a5715352656841b0fd61e886452e14f0f5924296af57d32d48515/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 11:31:03 functional-716900 cri-dockerd[5050]: time="2024-09-23T11:31:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7cc13ccb04cc35b62cf19dd5bcbd9e8804e9d8ca13c2067cd8dc513a45bcd43b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 11:31:11 functional-716900 dockerd[4681]: time="2024-09-23T11:31:11.008181381Z" level=info msg="ignoring event" container=8f60a057382f1680555fc3418a0b59594507a9f5640b6a9fe856a6c178c841c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cfb8684300921       6e38f40d628db       10 seconds ago       Running             storage-provisioner       2                   b60effb74a1de       storage-provisioner
	5924a8578faba       c69fa2e9cbf5f       31 seconds ago       Running             coredns                   1                   7cc13ccb04cc3       coredns-7c65d6cfc9-bbjn9
	9e1e0e1aef5e4       9aa1fad941575       32 seconds ago       Running             kube-scheduler            1                   c1395781659a5       kube-scheduler-functional-716900
	8f60a057382f1       6e38f40d628db       32 seconds ago       Exited              storage-provisioner       1                   b60effb74a1de       storage-provisioner
	03b352c93f34b       60c005f310ff3       32 seconds ago       Running             kube-proxy                1                   8e0c56f858071       kube-proxy-bbml6
	54903a976f502       2e96e5913fc06       32 seconds ago       Running             etcd                      1                   d633f5d129cd1       etcd-functional-716900
	2014d25b0bdc6       6bab7719df100       32 seconds ago       Running             kube-apiserver            1                   65d4281fcb017       kube-apiserver-functional-716900
	9e71991581799       175ffd71cce3d       32 seconds ago       Running             kube-controller-manager   1                   b35ac3604ec89       kube-controller-manager-functional-716900
	9fe0d4711d58a       c69fa2e9cbf5f       About a minute ago   Exited              coredns                   0                   db158363c024c       coredns-7c65d6cfc9-bbjn9
	7bcdbe1294b73       60c005f310ff3       About a minute ago   Exited              kube-proxy                0                   bf3b161badb8e       kube-proxy-bbml6
	1a44c6fdb4251       9aa1fad941575       About a minute ago   Exited              kube-scheduler            0                   3dd2e4c1934bf       kube-scheduler-functional-716900
	35323503c8ba5       6bab7719df100       About a minute ago   Exited              kube-apiserver            0                   8cc10b2291394       kube-apiserver-functional-716900
	5e66a15fdc581       175ffd71cce3d       About a minute ago   Exited              kube-controller-manager   0                   e80900d48a067       kube-controller-manager-functional-716900
	46919a62ac2e3       2e96e5913fc06       About a minute ago   Exited              etcd                      0                   cc657eeb35ceb       etcd-functional-716900
	
	
	==> coredns [5924a8578fab] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44830 - 62834 "HINFO IN 71189109601200033.6686428758700792429. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.107726757s
	
	
	==> coredns [9fe0d4711d58] <==
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[826968026]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:30:10.031) (total time: 21023ms):
	Trace[826968026]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21023ms (11:30:31.054)
	Trace[826968026]: [21.023898169s] [21.023898169s] END
	[INFO] plugin/kubernetes: Trace[1403990880]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:30:10.031) (total time: 21023ms):
	Trace[1403990880]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21023ms (11:30:31.054)
	Trace[1403990880]: [21.02380306s] [21.02380306s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[212126329]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 11:30:10.030) (total time: 21024ms):
	Trace[212126329]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21024ms (11:30:31.054)
	Trace[212126329]: [21.024903968s] [21.024903968s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:40555 - 37973 "HINFO IN 4612850248925225937.7731556892679074141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.082350905s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-716900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-716900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=functional-716900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_30_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:29:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-716900
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:31:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:31:33 +0000   Mon, 23 Sep 2024 11:29:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:31:33 +0000   Mon, 23 Sep 2024 11:29:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:31:33 +0000   Mon, 23 Sep 2024 11:29:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:31:33 +0000   Mon, 23 Sep 2024 11:29:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-716900
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	System Info:
	  Machine ID:                 5dfec7006ff54a4d9199b170878630a4
	  System UUID:                5dfec7006ff54a4d9199b170878630a4
	  Boot ID:                    39082465-ae0b-4792-bc81-a99f7997c7d1
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-bbjn9                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 etcd-functional-716900                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         94s
	  kube-system                 kube-apiserver-functional-716900             250m (1%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-functional-716900    200m (1%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-bbml6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-functional-716900             100m (0%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                             Age   From             Message
	  ----     ------                             ----  ----             -------
	  Normal   Starting                           85s   kube-proxy       
	  Normal   Starting                           23s   kube-proxy       
	  Warning  PossibleMemoryBackedVolumesOnDisk  95s   kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   Starting                           95s   kubelet          Starting kubelet.
	  Warning  CgroupV1                           95s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced            94s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory            94s   kubelet          Node functional-716900 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              94s   kubelet          Node functional-716900 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               94s   kubelet          Node functional-716900 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode                     91s   node-controller  Node functional-716900 event: Registered Node functional-716900 in Controller
	  Warning  ContainerGCFailed                  35s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	  Normal   RegisteredNode                     21s   node-controller  Node functional-716900 event: Registered Node functional-716900 in Controller
	
	
	==> dmesg <==
	[  +0.001288] FS-Cache: N-cookie c=00000011 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001302] FS-Cache: N-cookie d=00000000b0d5c2d6{9P.session} n=00000000b670bc28
	[  +0.001551] FS-Cache: N-key=[10] '34323934393337393534'
	[  +0.011137] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002189] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002737] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.003596] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.007837] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002428] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.004751] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002443] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.077675] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.098047] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.909307] netlink: 'init': attribute type 4 has an invalid length.
	[Sep23 11:09] tmpfs: Unknown parameter 'noswap'
	[  +9.754246] tmpfs: Unknown parameter 'noswap'
	[Sep23 11:28] tmpfs: Unknown parameter 'noswap'
	[  +9.514323] tmpfs: Unknown parameter 'noswap'
	[ +14.487860] tmpfs: Unknown parameter 'noswap'
	[Sep23 11:29] tmpfs: Unknown parameter 'noswap'
	[Sep23 11:30] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [46919a62ac2e] <==
	{"level":"info","ts":"2024-09-23T11:29:54.144676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-23T11:29:54.144684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T11:29:54.203592Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-716900 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:29:54.203697Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:29:54.204016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:29:54.204272Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:29:54.204877Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:29:54.205237Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:29:54.206548Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:29:54.207268Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:29:54.207472Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:29:54.207508Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:29:54.207632Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:29:54.208578Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T11:29:54.208636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:30:47.994862Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T11:30:47.995053Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-716900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-23T11:30:47.995163Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:30:47.995859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:30:48.112452Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T11:30:48.112648Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T11:30:48.112876Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-23T11:30:48.402527Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-23T11:30:48.403016Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-23T11:30:48.403235Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-716900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [54903a976f50] <==
	{"level":"info","ts":"2024-09-23T11:31:05.795970Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:31:07.002895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T11:31:07.003055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:31:07.003088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T11:31:07.003104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T11:31:07.003112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-23T11:31:07.003123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-23T11:31:07.003132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-23T11:31:07.010430Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-716900 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:31:07.010529Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:31:07.010824Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:31:07.011776Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:31:07.011991Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:31:07.015588Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:31:07.016610Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:31:07.017813Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:31:07.017911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T11:31:10.996693Z","caller":"traceutil/trace.go:171","msg":"trace[1055261985] linearizableReadLoop","detail":"{readStateIndex:461; appliedIndex:460; }","duration":"102.230828ms","start":"2024-09-23T11:31:10.894447Z","end":"2024-09-23T11:31:10.996678Z","steps":["trace[1055261985] 'read index received'  (duration: 97.913413ms)","trace[1055261985] 'applied index is now lower than readState.Index'  (duration: 4.316815ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:31:10.996868Z","caller":"traceutil/trace.go:171","msg":"trace[1870657116] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"104.146512ms","start":"2024-09-23T11:31:10.892676Z","end":"2024-09-23T11:31:10.996823Z","steps":["trace[1870657116] 'process raft request'  (duration: 99.596875ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:31:10.997292Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.842587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T11:31:10.997522Z","caller":"traceutil/trace.go:171","msg":"trace[2028131231] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:441; }","duration":"103.109313ms","start":"2024-09-23T11:31:10.894402Z","end":"2024-09-23T11:31:10.997511Z","steps":["trace[2028131231] 'agreement among raft nodes before linearized reading'  (duration: 102.493353ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:31:11.422714Z","caller":"traceutil/trace.go:171","msg":"trace[1836301533] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"108.565038ms","start":"2024-09-23T11:31:11.314124Z","end":"2024-09-23T11:31:11.422689Z","steps":["trace[1836301533] 'process raft request'  (duration: 84.294704ms)","trace[1836301533] 'compare'  (duration: 23.957703ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T11:31:11.422951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.058689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-functional-716900\" ","response":"range_response_count:1 size:7079"}
	{"level":"info","ts":"2024-09-23T11:31:11.422979Z","caller":"traceutil/trace.go:171","msg":"trace[1145704715] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-functional-716900; range_end:; response_count:1; response_revision:450; }","duration":"108.096093ms","start":"2024-09-23T11:31:11.314875Z","end":"2024-09-23T11:31:11.422971Z","steps":["trace[1145704715] 'agreement among raft nodes before linearized reading'  (duration: 107.999883ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:31:11.422833Z","caller":"traceutil/trace.go:171","msg":"trace[1274645210] linearizableReadLoop","detail":"{readStateIndex:469; appliedIndex:468; }","duration":"107.922576ms","start":"2024-09-23T11:31:11.314878Z","end":"2024-09-23T11:31:11.422801Z","steps":["trace[1274645210] 'read index received'  (duration: 83.359615ms)","trace[1274645210] 'applied index is now lower than readState.Index'  (duration: 24.560061ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:31:35 up 32 min,  0 users,  load average: 1.50, 1.56, 1.28
	Linux functional-716900 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [2014d25b0bdc] <==
	I0923 11:31:10.520963       1 controller.go:78] Starting OpenAPI AggregationController
	I0923 11:31:10.521768       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0923 11:31:10.520469       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0923 11:31:10.522789       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0923 11:31:10.693252       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 11:31:10.694016       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 11:31:10.694219       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 11:31:10.698250       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 11:31:10.698433       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 11:31:10.698967       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 11:31:10.699021       1 aggregator.go:171] initial CRD sync complete...
	I0923 11:31:10.699028       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 11:31:10.699034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 11:31:10.699039       1 cache.go:39] Caches are synced for autoregister controller
	I0923 11:31:10.699174       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 11:31:10.699191       1 policy_source.go:224] refreshing policies
	I0923 11:31:10.699742       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 11:31:10.791810       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 11:31:10.806584       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 11:31:10.891960       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 11:31:10.892027       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0923 11:31:10.998100       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0923 11:31:11.592574       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 11:31:15.098290       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 11:31:15.198876       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [35323503c8ba] <==
	W0923 11:30:57.206833       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.209503       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.211102       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.227811       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.251841       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.262685       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.312509       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.331970       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.345710       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.356422       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.382902       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.468957       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.473675       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.478543       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.503520       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.571154       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.640296       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.721997       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.731026       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.784301       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.827365       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.846364       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.895903       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.972887       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:30:57.981239       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5e66a15fdc58] <==
	I0923 11:30:05.129648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-716900"
	I0923 11:30:05.134394       1 shared_informer.go:320] Caches are synced for expand
	I0923 11:30:05.182223       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 11:30:05.215515       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 11:30:05.221167       1 shared_informer.go:320] Caches are synced for HPA
	I0923 11:30:05.337209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-716900"
	I0923 11:30:05.604192       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 11:30:05.667065       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 11:30:05.667163       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0923 11:30:06.130639       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="415.966116ms"
	I0923 11:30:06.198799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="67.941034ms"
	I0923 11:30:06.199094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.806µs"
	I0923 11:30:06.222081       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="38.604µs"
	I0923 11:30:08.600018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="185.280292ms"
	I0923 11:30:08.625545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.408381ms"
	I0923 11:30:08.625756       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.407µs"
	I0923 11:30:08.625968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="182.118µs"
	I0923 11:30:10.523022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="195.519µs"
	I0923 11:30:10.698535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.511µs"
	I0923 11:30:11.660118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-716900"
	I0923 11:30:20.643838       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="162.318µs"
	I0923 11:30:20.985454       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.308µs"
	I0923 11:30:20.991415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.104µs"
	I0923 11:30:40.673982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.164214ms"
	I0923 11:30:40.674198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.905µs"
	
	
	==> kube-controller-manager [9e7199158179] <==
	I0923 11:31:14.792642       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0923 11:31:14.793018       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0923 11:31:14.793441       1 shared_informer.go:320] Caches are synced for HPA
	I0923 11:31:14.793487       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0923 11:31:14.793787       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0923 11:31:14.794878       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0923 11:31:14.798322       1 shared_informer.go:320] Caches are synced for disruption
	I0923 11:31:14.800070       1 shared_informer.go:320] Caches are synced for GC
	I0923 11:31:14.801662       1 shared_informer.go:320] Caches are synced for stateful set
	I0923 11:31:14.804989       1 shared_informer.go:320] Caches are synced for daemon sets
	I0923 11:31:14.808132       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0923 11:31:14.858120       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0923 11:31:14.895356       1 shared_informer.go:320] Caches are synced for deployment
	I0923 11:31:14.896341       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0923 11:31:14.993096       1 shared_informer.go:320] Caches are synced for cronjob
	I0923 11:31:15.005603       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 11:31:15.009302       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 11:31:15.160526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="367.835864ms"
	I0923 11:31:15.160781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.505µs"
	I0923 11:31:15.444326       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 11:31:15.444455       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0923 11:31:15.459643       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 11:31:20.697250       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.297777ms"
	I0923 11:31:20.697515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.004µs"
	I0923 11:31:33.373287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-716900"
	
	
	==> kube-proxy [03b352c93f34] <==
	E0923 11:31:05.692304       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0923 11:31:05.792464       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0923 11:31:05.902582       1 server_linux.go:66] "Using iptables proxy"
	E0923 11:31:10.796695       1 server.go:666] "Failed to retrieve node info" err="nodes \"functional-716900\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]"
	I0923 11:31:12.006959       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 11:31:12.007288       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:31:12.217660       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 11:31:12.217922       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:31:12.294867       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0923 11:31:12.315899       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0923 11:31:12.338618       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0923 11:31:12.339164       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:31:12.339191       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:31:12.340756       1 config.go:199] "Starting service config controller"
	I0923 11:31:12.340788       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:31:12.340811       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:31:12.340816       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:31:12.392091       1 config.go:328] "Starting node config controller"
	I0923 11:31:12.392152       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:31:12.491789       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:31:12.492053       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:31:12.492438       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7bcdbe1294b7] <==
	E0923 11:30:09.536262       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0923 11:30:09.557922       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0923 11:30:09.577941       1 server_linux.go:66] "Using iptables proxy"
	I0923 11:30:10.013442       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 11:30:10.013655       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:30:10.132234       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 11:30:10.132371       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:30:10.136635       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0923 11:30:10.157832       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0923 11:30:10.176741       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0923 11:30:10.177100       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:30:10.177120       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:30:10.179265       1 config.go:328] "Starting node config controller"
	I0923 11:30:10.179365       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:30:10.180793       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:30:10.181001       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:30:10.181367       1 config.go:199] "Starting service config controller"
	I0923 11:30:10.181487       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:30:10.280600       1 shared_informer.go:320] Caches are synced for node config
	I0923 11:30:10.281690       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:30:10.297173       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1a44c6fdb425] <==
	W0923 11:29:58.552032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:29:58.552145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 11:29:58.677110       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:29:58.677251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:29:58.720000       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 11:29:58.720161       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 11:29:58.798009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:29:58.798159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:29:58.830116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:29:58.830223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:29:58.837018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:29:58.837200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:29:58.846841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:29:58.846958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:29:58.873573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:29:58.873677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:29:58.913775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:29:58.913996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:29:58.941745       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:29:58.941849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 11:30:01.535773       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 11:30:48.001462       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0923 11:30:48.001746       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0923 11:30:48.001949       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0923 11:30:48.002877       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9e1e0e1aef5e] <==
	I0923 11:31:07.117008       1 serving.go:386] Generated self-signed cert in-memory
	I0923 11:31:10.996790       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 11:31:10.996908       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:31:11.010760       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0923 11:31:11.010833       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0923 11:31:11.010987       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0923 11:31:11.011014       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0923 11:31:11.011518       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 11:31:11.011800       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 11:31:11.092102       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 11:31:11.094262       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 11:31:11.192394       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0923 11:31:11.192790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0923 11:31:11.193263       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:31:01 functional-716900 kubelet[2585]: I0923 11:31:01.803515    2585 status_manager.go:851] "Failed to get status for pod" podUID="70bcdf2a7b9a7c02f733e0d63ef8fea8" pod="kube-system/kube-scheduler-functional-716900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-716900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:01 functional-716900 kubelet[2585]: I0923 11:31:01.804553    2585 status_manager.go:851] "Failed to get status for pod" podUID="509ccef8-076a-4237-9930-f6a9219c6c05" pod="kube-system/kube-proxy-bbml6" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-bbml6\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:01 functional-716900 kubelet[2585]: I0923 11:31:01.805438    2585 status_manager.go:851] "Failed to get status for pod" podUID="183b09f8-493d-4c13-8105-8c6da1c032eb" pod="kube-system/coredns-7c65d6cfc9-bbjn9" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bbjn9\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:01 functional-716900 kubelet[2585]: I0923 11:31:01.806287    2585 status_manager.go:851] "Failed to get status for pod" podUID="acd58970-823c-4772-9860-2c7fa16de877" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:01 functional-716900 kubelet[2585]: I0923 11:31:01.806828    2585 status_manager.go:851] "Failed to get status for pod" podUID="3e0a6d91e619435c748061b92b56b32c" pod="kube-system/etcd-functional-716900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-716900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:01 functional-716900 kubelet[2585]: I0923 11:31:01.807326    2585 status_manager.go:851] "Failed to get status for pod" podUID="0e11769b158aee9d331c36a8821a5dbe" pod="kube-system/kube-controller-manager-functional-716900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-716900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:03 functional-716900 kubelet[2585]: I0923 11:31:03.624686    2585 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e0c56f858071c4131d0311385c9baa36e5f3ddbad39214da5abf133bada1262"
	Sep 23 11:31:04 functional-716900 kubelet[2585]: I0923 11:31:04.012364    2585 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cc13ccb04cc35b62cf19dd5bcbd9e8804e9d8ca13c2067cd8dc513a45bcd43b"
	Sep 23 11:31:04 functional-716900 kubelet[2585]: I0923 11:31:04.107145    2585 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b60effb74a1dec3694adc720e1d52b6f4b87507424e7a4c858c9f6c51812b2c3"
	Sep 23 11:31:04 functional-716900 kubelet[2585]: E0923 11:31:04.503631    2585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-716900?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="6.4s"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:04.999958    2585 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b35ac3604ec8940bba93614fea9282c0a566484fbdf6fecaf8bfbadda35e7ff3"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:05.195992    2585 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65d4281fcb01739922de9d890956c6bc592c45e1535092e29bca7d1fc2ac7a1b"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:05.197106    2585 status_manager.go:851] "Failed to get status for pod" podUID="509ccef8-076a-4237-9930-f6a9219c6c05" pod="kube-system/kube-proxy-bbml6" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-bbml6\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:05.198579    2585 status_manager.go:851] "Failed to get status for pod" podUID="183b09f8-493d-4c13-8105-8c6da1c032eb" pod="kube-system/coredns-7c65d6cfc9-bbjn9" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bbjn9\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:05.199557    2585 status_manager.go:851] "Failed to get status for pod" podUID="acd58970-823c-4772-9860-2c7fa16de877" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:05.201320    2585 status_manager.go:851] "Failed to get status for pod" podUID="3e0a6d91e619435c748061b92b56b32c" pod="kube-system/etcd-functional-716900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-716900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:05.201989    2585 status_manager.go:851] "Failed to get status for pod" podUID="0e11769b158aee9d331c36a8821a5dbe" pod="kube-system/kube-controller-manager-functional-716900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-716900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:05.202730    2585 status_manager.go:851] "Failed to get status for pod" podUID="77419c0fead389d47491ffc9360a1dde" pod="kube-system/kube-apiserver-functional-716900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-716900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:05.203790    2585 status_manager.go:851] "Failed to get status for pod" podUID="70bcdf2a7b9a7c02f733e0d63ef8fea8" pod="kube-system/kube-scheduler-functional-716900" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-716900\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:05.800736    2585 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1395781659a5715352656841b0fd61e886452e14f0f5924296af57d32d48515"
	Sep 23 11:31:05 functional-716900 kubelet[2585]: I0923 11:31:05.995056    2585 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d633f5d129cd1424d8b43c55ba0775628838e9e9e2b9fdb0ec5662d8e32e82cf"
	Sep 23 11:31:12 functional-716900 kubelet[2585]: I0923 11:31:12.111873    2585 scope.go:117] "RemoveContainer" containerID="b64e563da9994c307f40b0fd7df57ca1cfd29bef054a49c515a0d27b09660e80"
	Sep 23 11:31:12 functional-716900 kubelet[2585]: I0923 11:31:12.112467    2585 scope.go:117] "RemoveContainer" containerID="8f60a057382f1680555fc3418a0b59594507a9f5640b6a9fe856a6c178c841c9"
	Sep 23 11:31:12 functional-716900 kubelet[2585]: E0923 11:31:12.112718    2585 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(acd58970-823c-4772-9860-2c7fa16de877)\"" pod="kube-system/storage-provisioner" podUID="acd58970-823c-4772-9860-2c7fa16de877"
	Sep 23 11:31:25 functional-716900 kubelet[2585]: I0923 11:31:25.820016    2585 scope.go:117] "RemoveContainer" containerID="8f60a057382f1680555fc3418a0b59594507a9f5640b6a9fe856a6c178c841c9"
	
	
	==> storage-provisioner [8f60a057382f] <==
	I0923 11:31:05.598188       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0923 11:31:10.795558       1 main.go:39] error getting server version: unknown
	
	
	==> storage-provisioner [cfb868430092] <==
	I0923 11:31:26.218870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:31:26.240276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:31:26.240497       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-716900 -n functional-716900
helpers_test.go:261: (dbg) Run:  kubectl --context functional-716900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (5.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (417.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-694600 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0923 12:38:04.975057   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-694600 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: exit status 102 (6m49.8738538s)

                                                
                                                
-- stdout --
	* [old-k8s-version-694600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-694600" primary control-plane node in "old-k8s-version-694600" cluster
	* Pulling base image v0.0.45-1726784731-19672 ...
	* Restarting existing docker container for "old-k8s-version-694600" ...
	* Preparing Kubernetes v1.20.0 on Docker 27.3.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-694600 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 12:37:50.065673    4228 out.go:345] Setting OutFile to fd 1284 ...
	I0923 12:37:50.166700    4228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:37:50.166700    4228 out.go:358] Setting ErrFile to fd 1892...
	I0923 12:37:50.166700    4228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:37:50.195096    4228 out.go:352] Setting JSON to false
	I0923 12:37:50.199481    4228 start.go:129] hostinfo: {"hostname":"minikube2","uptime":5937,"bootTime":1727089132,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0923 12:37:50.200105    4228 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 12:37:50.203583    4228 out.go:177] * [old-k8s-version-694600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 12:37:50.208091    4228 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 12:37:50.208146    4228 notify.go:220] Checking for updates...
	I0923 12:37:50.212313    4228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:37:50.215244    4228 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0923 12:37:50.220567    4228 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:37:50.223348    4228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:37:50.227322    4228 config.go:182] Loaded profile config "old-k8s-version-694600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 12:37:50.230731    4228 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 12:37:50.233622    4228 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:37:50.462999    4228 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 12:37:50.472294    4228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:37:50.912562    4228 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:92 SystemTime:2024-09-23 12:37:50.87479325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 12:37:50.916523    4228 out.go:177] * Using the docker driver based on existing profile
	I0923 12:37:50.920281    4228 start.go:297] selected driver: docker
	I0923 12:37:50.920347    4228 start.go:901] validating driver "docker" against &{Name:old-k8s-version-694600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-694600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:37:50.920573    4228 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:37:51.004956    4228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:37:51.403332    4228 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:92 SystemTime:2024-09-23 12:37:51.358222752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 12:37:51.404447    4228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:37:51.404641    4228 cni.go:84] Creating CNI manager for ""
	I0923 12:37:51.404641    4228 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 12:37:51.405018    4228 start.go:340] cluster config:
	{Name:old-k8s-version-694600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-694600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:37:51.412479    4228 out.go:177] * Starting "old-k8s-version-694600" primary control-plane node in "old-k8s-version-694600" cluster
	I0923 12:37:51.414985    4228 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 12:37:51.419178    4228 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 12:37:51.421746    4228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 12:37:51.421746    4228 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 12:37:51.421746    4228 preload.go:146] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0923 12:37:51.421746    4228 cache.go:56] Caching tarball of preloaded images
	I0923 12:37:51.423495    4228 preload.go:172] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:37:51.423495    4228 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 12:37:51.424562    4228 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-694600\config.json ...
	I0923 12:37:51.569125    4228 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 12:37:51.569125    4228 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 12:37:51.569125    4228 cache.go:194] Successfully downloaded all kic artifacts
	I0923 12:37:51.569125    4228 start.go:360] acquireMachinesLock for old-k8s-version-694600: {Name:mk60e57ef64ed7ba55c5b0fe4d3c9f2a9be93adf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:37:51.569125    4228 start.go:364] duration metric: took 0s to acquireMachinesLock for "old-k8s-version-694600"
	I0923 12:37:51.569821    4228 start.go:96] Skipping create...Using existing machine configuration
	I0923 12:37:51.569821    4228 fix.go:54] fixHost starting: 
	I0923 12:37:51.593465    4228 cli_runner.go:164] Run: docker container inspect old-k8s-version-694600 --format={{.State.Status}}
	I0923 12:37:51.693218    4228 fix.go:112] recreateIfNeeded on old-k8s-version-694600: state=Stopped err=<nil>
	W0923 12:37:51.693218    4228 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 12:37:51.697947    4228 out.go:177] * Restarting existing docker container for "old-k8s-version-694600" ...
	I0923 12:37:51.714841    4228 cli_runner.go:164] Run: docker start old-k8s-version-694600
	I0923 12:37:52.504008    4228 cli_runner.go:164] Run: docker container inspect old-k8s-version-694600 --format={{.State.Status}}
	I0923 12:37:52.607998    4228 kic.go:430] container "old-k8s-version-694600" state is running.
	I0923 12:37:52.621325    4228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-694600
	I0923 12:37:52.725001    4228 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-694600\config.json ...
	I0923 12:37:52.728358    4228 machine.go:93] provisionDockerMachine start ...
	I0923 12:37:52.744082    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:52.868525    4228 main.go:141] libmachine: Using SSH client type: native
	I0923 12:37:52.869177    4228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 60038 <nil> <nil>}
	I0923 12:37:52.869251    4228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:37:52.872773    4228 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0923 12:37:56.105037    4228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-694600
	
	I0923 12:37:56.105195    4228 ubuntu.go:169] provisioning hostname "old-k8s-version-694600"
	I0923 12:37:56.117882    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:56.219761    4228 main.go:141] libmachine: Using SSH client type: native
	I0923 12:37:56.220418    4228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 60038 <nil> <nil>}
	I0923 12:37:56.220418    4228 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-694600 && echo "old-k8s-version-694600" | sudo tee /etc/hostname
	I0923 12:37:56.456853    4228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-694600
	
	I0923 12:37:56.471002    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:56.567799    4228 main.go:141] libmachine: Using SSH client type: native
	I0923 12:37:56.568544    4228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 60038 <nil> <nil>}
	I0923 12:37:56.568606    4228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-694600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-694600/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-694600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:37:56.762996    4228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:37:56.763068    4228 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I0923 12:37:56.763179    4228 ubuntu.go:177] setting up certificates
	I0923 12:37:56.763216    4228 provision.go:84] configureAuth start
	I0923 12:37:56.774237    4228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-694600
	I0923 12:37:56.862198    4228 provision.go:143] copyHostCerts
	I0923 12:37:56.862839    4228 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I0923 12:37:56.862893    4228 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I0923 12:37:56.863372    4228 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 12:37:56.864568    4228 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I0923 12:37:56.864568    4228 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I0923 12:37:56.864568    4228 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 12:37:56.866150    4228 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I0923 12:37:56.866150    4228 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I0923 12:37:56.866150    4228 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I0923 12:37:56.867419    4228 provision.go:117] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-694600 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-694600]
	I0923 12:37:57.002619    4228 provision.go:177] copyRemoteCerts
	I0923 12:37:57.022455    4228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:37:57.032808    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:57.126749    4228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60038 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-694600\id_rsa Username:docker}
	I0923 12:37:57.284571    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:37:57.349962    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0923 12:37:57.417035    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 12:37:57.466324    4228 provision.go:87] duration metric: took 703.103ms to configureAuth
	I0923 12:37:57.466324    4228 ubuntu.go:193] setting minikube options for container-runtime
	I0923 12:37:57.476234    4228 config.go:182] Loaded profile config "old-k8s-version-694600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 12:37:57.486438    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:57.574962    4228 main.go:141] libmachine: Using SSH client type: native
	I0923 12:37:57.575413    4228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 60038 <nil> <nil>}
	I0923 12:37:57.575522    4228 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:37:57.774104    4228 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 12:37:57.774215    4228 ubuntu.go:71] root file system type: overlay
	I0923 12:37:57.774421    4228 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:37:57.784752    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:57.880458    4228 main.go:141] libmachine: Using SSH client type: native
	I0923 12:37:57.881020    4228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 60038 <nil> <nil>}
	I0923 12:37:57.881020    4228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:37:58.116243    4228 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:37:58.129971    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:58.225949    4228 main.go:141] libmachine: Using SSH client type: native
	I0923 12:37:58.226493    4228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x551bc0] 0x554700 <nil>  [] 0s} 127.0.0.1 60038 <nil> <nil>}
	I0923 12:37:58.226605    4228 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:37:58.458662    4228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:37:58.458662    4228 machine.go:96] duration metric: took 5.7299905s to provisionDockerMachine
	I0923 12:37:58.458662    4228 start.go:293] postStartSetup for "old-k8s-version-694600" (driver="docker")
	I0923 12:37:58.458759    4228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:37:58.476391    4228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:37:58.489377    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:58.575443    4228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60038 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-694600\id_rsa Username:docker}
	I0923 12:37:58.770520    4228 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:37:58.781661    4228 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 12:37:58.781661    4228 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 12:37:58.781661    4228 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 12:37:58.781817    4228 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 12:37:58.781857    4228 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I0923 12:37:58.782385    4228 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I0923 12:37:58.783909    4228 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\132002.pem -> 132002.pem in /etc/ssl/certs
	I0923 12:37:58.806194    4228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:37:58.832638    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\132002.pem --> /etc/ssl/certs/132002.pem (1708 bytes)
	I0923 12:37:58.878327    4228 start.go:296] duration metric: took 419.6619ms for postStartSetup
	I0923 12:37:58.900436    4228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:37:58.913302    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:58.989576    4228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60038 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-694600\id_rsa Username:docker}
	I0923 12:37:59.163235    4228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 12:37:59.188685    4228 fix.go:56] duration metric: took 7.6188054s for fixHost
	I0923 12:37:59.188685    4228 start.go:83] releasing machines lock for "old-k8s-version-694600", held for 7.6195019s
	I0923 12:37:59.205254    4228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-694600
	I0923 12:37:59.310345    4228 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 12:37:59.325680    4228 ssh_runner.go:195] Run: cat /version.json
	I0923 12:37:59.327268    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:59.338560    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:37:59.438884    4228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60038 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-694600\id_rsa Username:docker}
	I0923 12:37:59.443802    4228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60038 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-694600\id_rsa Username:docker}
	W0923 12:37:59.571513    4228 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 12:37:59.588613    4228 ssh_runner.go:195] Run: systemctl --version
	I0923 12:37:59.623444    4228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:37:59.657439    4228 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0923 12:37:59.683977    4228 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0923 12:37:59.683977    4228 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	W0923 12:37:59.688566    4228 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0923 12:37:59.708918    4228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0923 12:37:59.765762    4228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0923 12:37:59.817705    4228 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:37:59.817705    4228 start.go:495] detecting cgroup driver to use...
	I0923 12:37:59.817705    4228 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 12:37:59.818256    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:37:59.867103    4228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0923 12:37:59.911814    4228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:37:59.939097    4228 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:37:59.951678    4228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:37:59.992297    4228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:38:00.043533    4228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:38:00.089099    4228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:38:00.136954    4228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:38:00.188460    4228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
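Each of the `config.toml` edits above is an in-place `sed` rewrite. As a self-contained illustration (the TOML fragment below is invented for the sketch, not taken from the test host), the cgroup-driver edit behaves like this:

```shell
# Apply the SystemdCgroup edit from the log to a sample config fragment.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same sed expression as in the log: flip the flag, keep the indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"   # prints:   SystemdCgroup = false
rm -f "$cfg"
```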
	I0923 12:38:00.239993    4228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:38:00.275160    4228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:38:00.326421    4228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:38:00.501070    4228 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:38:00.758432    4228 start.go:495] detecting cgroup driver to use...
	I0923 12:38:00.758551    4228 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 12:38:00.773747    4228 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:38:00.812461    4228 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 12:38:00.828256    4228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:38:00.860761    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:38:00.932775    4228 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:38:00.966018    4228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:38:00.976656    4228 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0923 12:38:01.061457    4228 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:38:01.311055    4228 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:38:01.501090    4228 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:38:01.501697    4228 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:38:01.560604    4228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:38:01.785734    4228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:38:02.942142    4228 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1563993s)
	I0923 12:38:02.955055    4228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:38:03.033280    4228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:38:03.113069    4228 out.go:235] * Preparing Kubernetes v1.20.0 on Docker 27.3.0 ...
	I0923 12:38:03.122384    4228 cli_runner.go:164] Run: docker exec -t old-k8s-version-694600 dig +short host.docker.internal
	I0923 12:38:03.350250    4228 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0923 12:38:03.363258    4228 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0923 12:38:03.378975    4228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
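The one-liner above updates `/etc/hosts` idempotently: strip any existing `host.minikube.internal` entry, then append the current one. The same pattern against a throwaway file (sample contents invented for the sketch):

```shell
# Idempotent host-entry update, run on a temp copy instead of /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.65.2\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host\.minikube\.internal$' "$hosts"; \
  printf '192.168.65.254\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # prints 1: old entry replaced
rm -f "$hosts"
```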
	I0923 12:38:03.423457    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:38:03.523496    4228 kubeadm.go:883] updating cluster {Name:old-k8s-version-694600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-694600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:38:03.523496    4228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 12:38:03.535640    4228 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:38:03.594536    4228 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0923 12:38:03.594536    4228 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
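The "wasn't preloaded" verdict is a registry-name mismatch rather than missing images: the preload tarball carries the legacy `k8s.gcr.io/*` tags (visible in the `docker images` output above) while the check expects `registry.k8s.io/*`. The mapping between the two is a pure prefix swap; with a live daemon the output below could feed `docker tag` (not invoked here):

```shell
# Prefix swap from the legacy registry name to the current one.
old='k8s.gcr.io/kube-apiserver:v1.20.0'
new=$(printf '%s' "$old" | sed 's|^k8s\.gcr\.io/|registry.k8s.io/|')
echo "$new"   # registry.k8s.io/kube-apiserver:v1.20.0
```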
	I0923 12:38:03.609878    4228 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 12:38:03.652610    4228 ssh_runner.go:195] Run: which lz4
	I0923 12:38:03.682853    4228 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 12:38:03.698147    4228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 12:38:03.698481    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (401930599 bytes)
	I0923 12:38:12.979114    4228 docker.go:649] duration metric: took 9.3099898s to copy over tarball
	I0923 12:38:13.002100    4228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 12:38:19.148586    4228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.1463931s)
	I0923 12:38:19.148642    4228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 12:38:21.403767    4228 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 12:38:21.431708    4228 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2824 bytes)
	I0923 12:38:21.488781    4228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:38:21.668279    4228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:38:31.259068    4228 ssh_runner.go:235] Completed: sudo systemctl restart docker: (9.5907151s)
	I0923 12:38:31.268044    4228 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:38:31.325654    4228 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0923 12:38:31.325654    4228 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0923 12:38:31.325929    4228 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 12:38:31.342155    4228 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:38:31.349981    4228 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 12:38:31.359612    4228 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:38:31.359911    4228 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0923 12:38:31.368169    4228 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 12:38:31.374819    4228 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0923 12:38:31.379023    4228 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 12:38:31.382944    4228 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0923 12:38:31.387755    4228 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0923 12:38:31.396648    4228 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0923 12:38:31.398288    4228 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 12:38:31.399488    4228 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 12:38:31.409488    4228 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 12:38:31.412443    4228 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0923 12:38:31.427542    4228 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 12:38:31.435482    4228 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	W0923 12:38:31.487818    4228 image.go:188] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 12:38:31.573981    4228 image.go:188] authn lookup for registry.k8s.io/kube-apiserver:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 12:38:31.665982    4228 image.go:188] authn lookup for registry.k8s.io/coredns:1.7.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 12:38:31.758464    4228 image.go:188] authn lookup for registry.k8s.io/etcd:3.4.13-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 12:38:31.848124    4228 image.go:188] authn lookup for registry.k8s.io/kube-scheduler:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 12:38:31.944485    4228 image.go:188] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 12:38:32.037978    4228 image.go:188] authn lookup for registry.k8s.io/kube-controller-manager:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0923 12:38:32.075151    4228 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0923 12:38:32.077459    4228 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0923 12:38:32.106530    4228 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:38:32.107835    4228 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	W0923 12:38:32.152252    4228 image.go:188] authn lookup for registry.k8s.io/kube-proxy:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0923 12:38:32.196500    4228 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0923 12:38:32.197188    4228 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0923 12:38:32.281025    4228 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0923 12:38:32.281429    4228 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.20.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0923 12:38:32.281429    4228 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 12:38:32.282913    4228 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0923 12:38:32.282913    4228 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.7.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0923 12:38:32.282913    4228 docker.go:337] Removing image: registry.k8s.io/coredns:1.7.0
	I0923 12:38:32.303214    4228 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.7.0
	I0923 12:38:32.304630    4228 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0923 12:38:32.313892    4228 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0923 12:38:32.314062    4228 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.13-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0923 12:38:32.314102    4228 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0923 12:38:32.330233    4228 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.13-0
	I0923 12:38:32.340349    4228 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 12:38:32.417606    4228 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0923 12:38:32.417753    4228 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.20.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0923 12:38:32.417606    4228 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0923 12:38:32.417753    4228 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 12:38:32.417753    4228 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0923 12:38:32.417753    4228 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0923 12:38:32.428474    4228 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0923 12:38:32.429277    4228 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0923 12:38:32.471894    4228 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0923 12:38:32.501401    4228 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0923 12:38:32.502026    4228 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0923 12:38:32.581389    4228 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0923 12:38:32.590479    4228 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0923 12:38:32.590479    4228 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.20.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0923 12:38:32.590628    4228 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 12:38:32.613883    4228 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 12:38:32.628190    4228 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0923 12:38:32.628289    4228 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0923 12:38:32.697495    4228 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0923 12:38:32.697495    4228 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.20.0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0923 12:38:32.697495    4228 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 12:38:32.713921    4228 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.20.0
	I0923 12:38:32.741707    4228 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0923 12:38:32.768090    4228 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0923 12:38:32.768760    4228 cache_images.go:92] duration metric: took 1.4428197s to LoadCachedImages
	W0923 12:38:32.769133    4228 out.go:270] X Unable to load cached images: LoadCachedImages: CreateFile C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0: The system cannot find the file specified.
	X Unable to load cached images: LoadCachedImages: CreateFile C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0: The system cannot find the file specified.
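The `windows sanitize` lines above, and the `CreateFile ... cannot find the file specified` failure here, both revolve around the on-disk cache names: `:` is not a legal character in Windows filenames, so the image tag's colon is mapped to an underscore when the cache path is built. The mapping in isolation:

```shell
# Colon-to-underscore mapping used for image-cache filenames on Windows
# (NTFS forbids ':' in file names).
img='kube-apiserver:v1.20.0'
printf '%s\n' "$img" | tr ':' '_'   # kube-apiserver_v1.20.0
```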
	I0923 12:38:32.769234    4228 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 docker true true} ...
	I0923 12:38:32.769606    4228 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-694600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-694600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:38:32.780548    4228 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 12:38:32.908040    4228 cni.go:84] Creating CNI manager for ""
	I0923 12:38:32.909118    4228 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 12:38:32.909118    4228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:38:32.909183    4228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-694600 NodeName:old-k8s-version-694600 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0923 12:38:32.909558    4228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-694600"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 12:38:32.932231    4228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0923 12:38:32.957788    4228 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:38:32.970656    4228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 12:38:32.997481    4228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I0923 12:38:33.049081    4228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:38:33.102549    4228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2121 bytes)
	I0923 12:38:33.158089    4228 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0923 12:38:33.173735    4228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:38:33.221708    4228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:38:33.426433    4228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:38:33.460726    4228 certs.go:68] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-694600 for IP: 192.168.103.2
	I0923 12:38:33.460726    4228 certs.go:194] generating shared ca certs ...
	I0923 12:38:33.460726    4228 certs.go:226] acquiring lock for ca certs: {Name:mka39b35711ce17aa627001b408a7adb2f266bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:38:33.461969    4228 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I0923 12:38:33.462347    4228 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I0923 12:38:33.462649    4228 certs.go:256] generating profile certs ...
	I0923 12:38:33.463541    4228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-694600\client.key
	I0923 12:38:33.464087    4228 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-694600\apiserver.key.c11ba674
	I0923 12:38:33.464573    4228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-694600\proxy-client.key
	I0923 12:38:33.465652    4228 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\13200.pem (1338 bytes)
	W0923 12:38:33.466200    4228 certs.go:480] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\13200_empty.pem, impossibly tiny 0 bytes
	I0923 12:38:33.466388    4228 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0923 12:38:33.466829    4228 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 12:38:33.467133    4228 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 12:38:33.467501    4228 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0923 12:38:33.467566    4228 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\132002.pem (1708 bytes)
	I0923 12:38:33.469720    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:38:33.529425    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 12:38:33.600727    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:38:33.690889    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 12:38:33.754667    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-694600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 12:38:33.828156    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-694600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:38:33.913991    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-694600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:38:34.003607    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-694600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 12:38:34.066717    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\13200.pem --> /usr/share/ca-certificates/13200.pem (1338 bytes)
	I0923 12:38:34.213736    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\132002.pem --> /usr/share/ca-certificates/132002.pem (1708 bytes)
	I0923 12:38:34.273816    4228 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:38:34.331110    4228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:38:34.403752    4228 ssh_runner.go:195] Run: openssl version
	I0923 12:38:34.511043    4228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13200.pem && ln -fs /usr/share/ca-certificates/13200.pem /etc/ssl/certs/13200.pem"
	I0923 12:38:34.619303    4228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13200.pem
	I0923 12:38:34.680082    4228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:29 /usr/share/ca-certificates/13200.pem
	I0923 12:38:34.703007    4228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13200.pem
	I0923 12:38:34.743219    4228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13200.pem /etc/ssl/certs/51391683.0"
	I0923 12:38:34.810496    4228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132002.pem && ln -fs /usr/share/ca-certificates/132002.pem /etc/ssl/certs/132002.pem"
	I0923 12:38:34.910618    4228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132002.pem
	I0923 12:38:34.981105    4228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:29 /usr/share/ca-certificates/132002.pem
	I0923 12:38:34.997382    4228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132002.pem
	I0923 12:38:35.038335    4228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132002.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:38:35.112669    4228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:38:35.213955    4228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:38:35.285199    4228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:09 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:38:35.312442    4228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:38:35.349064    4228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:38:35.417906    4228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:38:35.513799    4228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 12:38:35.612552    4228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 12:38:35.708114    4228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 12:38:35.805448    4228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 12:38:35.843461    4228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 12:38:35.898709    4228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 12:38:35.920863    4228 kubeadm.go:392] StartCluster: {Name:old-k8s-version-694600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-694600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:38:35.933411    4228 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 12:38:36.106749    4228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:38:36.183084    4228 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 12:38:36.183212    4228 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 12:38:36.207156    4228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 12:38:36.292381    4228 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 12:38:36.305811    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:38:36.406481    4228 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-694600" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 12:38:36.409732    4228 kubeconfig.go:62] C:\Users\jenkins.minikube2\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-694600" cluster setting kubeconfig missing "old-k8s-version-694600" context setting]
	I0923 12:38:36.412826    4228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:38:36.449830    4228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 12:38:36.587302    4228 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0923 12:38:36.587487    4228 kubeadm.go:597] duration metric: took 404.2722ms to restartPrimaryControlPlane
	I0923 12:38:36.587487    4228 kubeadm.go:394] duration metric: took 666.6192ms to StartCluster
	I0923 12:38:36.587589    4228 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:38:36.587984    4228 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 12:38:36.593188    4228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:38:36.595475    4228 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:38:36.595475    4228 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 12:38:36.595475    4228 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-694600"
	I0923 12:38:36.595475    4228 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-694600"
	I0923 12:38:36.595475    4228 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-694600"
	W0923 12:38:36.595475    4228 addons.go:243] addon storage-provisioner should already be in state true
	I0923 12:38:36.596053    4228 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-694600"
	I0923 12:38:36.596053    4228 addons.go:69] Setting dashboard=true in profile "old-k8s-version-694600"
	I0923 12:38:36.596053    4228 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-694600"
	I0923 12:38:36.596188    4228 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-694600"
	W0923 12:38:36.596188    4228 addons.go:243] addon metrics-server should already be in state true
	I0923 12:38:36.596053    4228 addons.go:234] Setting addon dashboard=true in "old-k8s-version-694600"
	W0923 12:38:36.596188    4228 addons.go:243] addon dashboard should already be in state true
	I0923 12:38:36.596188    4228 host.go:66] Checking if "old-k8s-version-694600" exists ...
	I0923 12:38:36.596188    4228 host.go:66] Checking if "old-k8s-version-694600" exists ...
	I0923 12:38:36.596188    4228 config.go:182] Loaded profile config "old-k8s-version-694600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 12:38:36.596188    4228 host.go:66] Checking if "old-k8s-version-694600" exists ...
	I0923 12:38:36.601510    4228 out.go:177] * Verifying Kubernetes components...
	I0923 12:38:36.621794    4228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:38:36.624791    4228 cli_runner.go:164] Run: docker container inspect old-k8s-version-694600 --format={{.State.Status}}
	I0923 12:38:36.625404    4228 cli_runner.go:164] Run: docker container inspect old-k8s-version-694600 --format={{.State.Status}}
	I0923 12:38:36.626454    4228 cli_runner.go:164] Run: docker container inspect old-k8s-version-694600 --format={{.State.Status}}
	I0923 12:38:36.627915    4228 cli_runner.go:164] Run: docker container inspect old-k8s-version-694600 --format={{.State.Status}}
	I0923 12:38:36.743725    4228 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:38:36.746685    4228 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:38:36.746741    4228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:38:36.746741    4228 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0923 12:38:36.749305    4228 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 12:38:36.749373    4228 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 12:38:36.749725    4228 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0923 12:38:36.752200    4228 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0923 12:38:36.754483    4228 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-694600"
	W0923 12:38:36.754545    4228 addons.go:243] addon default-storageclass should already be in state true
	I0923 12:38:36.754600    4228 host.go:66] Checking if "old-k8s-version-694600" exists ...
	I0923 12:38:36.754781    4228 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0923 12:38:36.754845    4228 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0923 12:38:36.762830    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:38:36.763484    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:38:36.769168    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:38:36.792162    4228 cli_runner.go:164] Run: docker container inspect old-k8s-version-694600 --format={{.State.Status}}
	I0923 12:38:36.876912    4228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60038 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-694600\id_rsa Username:docker}
	I0923 12:38:36.877275    4228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60038 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-694600\id_rsa Username:docker}
	I0923 12:38:36.884612    4228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60038 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-694600\id_rsa Username:docker}
	I0923 12:38:36.896178    4228 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:38:36.896178    4228 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:38:36.912916    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-694600
	I0923 12:38:37.011722    4228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60038 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-694600\id_rsa Username:docker}
	I0923 12:38:37.702224    4228 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 12:38:37.702224    4228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0923 12:38:37.803747    4228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:38:37.883826    4228 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0923 12:38:37.883929    4228 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0923 12:38:37.886421    4228 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.264618s)
	I0923 12:38:37.913725    4228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:38:38.011225    4228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:38:38.100962    4228 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 12:38:38.101133    4228 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 12:38:38.187850    4228 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0923 12:38:38.187850    4228 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0923 12:38:38.380266    4228 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:38:38.380266    4228 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 12:38:38.496912    4228 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0923 12:38:38.496912    4228 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0923 12:38:38.613669    4228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:38:38.792060    4228 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0923 12:38:38.792118    4228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0923 12:38:38.898318    4228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0945618s)
	W0923 12:38:38.898809    4228 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:38.898922    4228 retry.go:31] will retry after 271.123439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:38.912362    4228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-694600
	W0923 12:38:38.986481    4228 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:38.987475    4228 retry.go:31] will retry after 138.078267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:38.996312    4228 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-694600" to be "Ready" ...
	I0923 12:38:39.090401    4228 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0923 12:38:39.090486    4228 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0923 12:38:39.143319    4228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:38:39.198190    4228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:38:39.289832    4228 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0923 12:38:39.289889    4228 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0923 12:38:39.391750    4228 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:39.391750    4228 retry.go:31] will retry after 372.857859ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:39.591487    4228 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0923 12:38:39.591586    4228 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0923 12:38:39.782628    4228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:38:39.883058    4228 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0923 12:38:39.883058    4228 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0923 12:38:40.183727    4228 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:38:40.183727    4228 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0923 12:38:40.189292    4228 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:40.189292    4228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.0459647s)
	I0923 12:38:40.189292    4228 retry.go:31] will retry after 293.13656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 12:38:40.189839    4228 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:40.189900    4228 retry.go:31] will retry after 347.668915ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:40.414135    4228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:38:40.511411    4228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:38:40.557391    4228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:38:40.880783    4228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.0973362s)
	W0923 12:38:40.880783    4228 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:40.881265    4228 retry.go:31] will retry after 553.689055ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:38:41.449133    4228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:38:48.200070    4228 node_ready.go:49] node "old-k8s-version-694600" has status "Ready":"True"
	I0923 12:38:48.200140    4228 node_ready.go:38] duration metric: took 9.2037573s for node "old-k8s-version-694600" to be "Ready" ...
	I0923 12:38:48.200140    4228 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:38:48.399809    4228 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-xs5mb" in "kube-system" namespace to be "Ready" ...
	I0923 12:38:49.086100    4228 pod_ready.go:93] pod "coredns-74ff55c5b-xs5mb" in "kube-system" namespace has status "Ready":"True"
	I0923 12:38:49.086190    4228 pod_ready.go:82] duration metric: took 685.832ms for pod "coredns-74ff55c5b-xs5mb" in "kube-system" namespace to be "Ready" ...
	I0923 12:38:49.086264    4228 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-694600" in "kube-system" namespace to be "Ready" ...
	I0923 12:38:49.381352    4228 pod_ready.go:93] pod "etcd-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"True"
	I0923 12:38:49.381451    4228 pod_ready.go:82] duration metric: took 295.1846ms for pod "etcd-old-k8s-version-694600" in "kube-system" namespace to be "Ready" ...
	I0923 12:38:49.381451    4228 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-694600" in "kube-system" namespace to be "Ready" ...
	I0923 12:38:51.415456    4228 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:38:53.483702    4228 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:38:54.383747    4228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.9694636s)
	I0923 12:38:54.383747    4228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (13.8262501s)
	I0923 12:38:54.385221    4228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.8737029s)
	I0923 12:38:54.385685    4228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.9361431s)
	I0923 12:38:54.385749    4228 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-694600"
	I0923 12:38:54.387698    4228 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-694600 addons enable metrics-server
	
	I0923 12:38:54.598947    4228 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0923 12:38:54.605369    4228 addons.go:510] duration metric: took 18.0097556s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0923 12:38:55.981033    4228 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:38:57.904491    4228 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"True"
	I0923 12:38:57.904556    4228 pod_ready.go:82] duration metric: took 8.5230394s for pod "kube-apiserver-old-k8s-version-694600" in "kube-system" namespace to be "Ready" ...
	I0923 12:38:57.904617    4228 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace to be "Ready" ...
	I0923 12:38:59.920760    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:01.921844    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:03.922119    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:05.926278    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:08.422949    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:10.425694    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:12.923491    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:14.978247    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:17.424460    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:19.431178    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:21.432148    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:23.936013    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:26.426923    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:28.933629    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:31.423245    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:33.425001    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:35.925259    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:37.929416    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:40.008139    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:42.422489    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:44.422876    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:46.426660    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:48.941528    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:51.428553    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:53.931212    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:56.423916    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:39:58.923480    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:00.928138    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:03.425863    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:05.463224    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:07.940395    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:10.420260    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:12.421485    4228 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:13.921277    4228 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"True"
	I0923 12:40:13.921313    4228 pod_ready.go:82] duration metric: took 1m16.016106s for pod "kube-controller-manager-old-k8s-version-694600" in "kube-system" namespace to be "Ready" ...
	I0923 12:40:13.921313    4228 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nf8m5" in "kube-system" namespace to be "Ready" ...
	I0923 12:40:13.934390    4228 pod_ready.go:93] pod "kube-proxy-nf8m5" in "kube-system" namespace has status "Ready":"True"
	I0923 12:40:13.934390    4228 pod_ready.go:82] duration metric: took 13.077ms for pod "kube-proxy-nf8m5" in "kube-system" namespace to be "Ready" ...
	I0923 12:40:13.934390    4228 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-694600" in "kube-system" namespace to be "Ready" ...
	I0923 12:40:13.946207    4228 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-694600" in "kube-system" namespace has status "Ready":"True"
	I0923 12:40:13.946207    4228 pod_ready.go:82] duration metric: took 11.8176ms for pod "kube-scheduler-old-k8s-version-694600" in "kube-system" namespace to be "Ready" ...
	I0923 12:40:13.946207    4228 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace to be "Ready" ...
	I0923 12:40:15.963580    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:18.462315    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:20.961336    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:22.986849    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:25.465469    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:27.963729    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:29.964788    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:32.462987    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:34.963803    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:37.461930    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:39.463289    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:41.962663    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:44.463824    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:46.963724    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:49.465088    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:51.962577    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:54.466302    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:56.962839    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:40:59.470386    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:01.962958    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:03.964949    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:05.966365    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:08.462036    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:10.963209    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:12.965049    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:15.464357    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:17.965894    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:20.463647    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:22.960584    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:24.983324    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:27.464303    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:29.969654    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:32.476396    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:34.966947    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:36.978533    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:39.486135    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:41.964905    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:44.463727    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:46.470607    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:48.964805    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:50.965939    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:53.465189    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:55.483449    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:41:59.022454    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:01.317033    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:03.464059    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:05.518531    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:07.983729    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:10.464919    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:12.468161    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:14.964097    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:17.085472    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:19.458913    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:21.461199    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:23.462326    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:25.462699    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:27.769477    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:29.813138    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:31.963212    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:33.969407    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:36.464909    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:38.465432    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:40.963371    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:43.469456    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:45.470329    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:47.965671    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:50.464523    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:52.968293    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:55.462578    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:57.467361    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:00.030842    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:02.467661    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:04.964850    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:07.468199    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:09.965707    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:12.469599    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:14.968776    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:16.971510    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:19.464247    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:21.468118    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:23.968652    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:26.466047    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:28.466821    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:30.965049    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:33.466742    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:35.973524    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:38.465393    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:40.467962    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:42.471423    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:44.963668    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:46.967051    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:48.975093    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:51.468183    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:53.475604    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:55.966531    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:43:58.462662    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:00.467103    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:02.480437    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:04.964621    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:06.968004    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:09.463262    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:11.465587    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:13.471620    4228 pod_ready.go:103] pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:13.948810    4228 pod_ready.go:82] duration metric: took 4m0.0007197s for pod "metrics-server-9975d5f86-vmdbz" in "kube-system" namespace to be "Ready" ...
	E0923 12:44:13.948810    4228 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0923 12:44:13.948810    4228 pod_ready.go:39] duration metric: took 5m25.7460534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:44:13.948810    4228 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:44:13.970804    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 12:44:14.026801    4228 logs.go:276] 2 containers: [bf2bddf93da4 99e36abc5feb]
	I0923 12:44:14.035793    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 12:44:14.104915    4228 logs.go:276] 2 containers: [db948e782f56 a7d64ac5d685]
	I0923 12:44:14.116888    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 12:44:14.173241    4228 logs.go:276] 2 containers: [ac17c0c4ecff db8367477ac1]
	I0923 12:44:14.190100    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 12:44:14.235095    4228 logs.go:276] 2 containers: [6f9ee2379541 585ea5a5976b]
	I0923 12:44:14.242134    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 12:44:14.327099    4228 logs.go:276] 2 containers: [ae76dbbad5df 606305ef153c]
	I0923 12:44:14.333261    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 12:44:14.393114    4228 logs.go:276] 2 containers: [cbaa7f55c1cf 1382815ffdc3]
	I0923 12:44:14.406101    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 12:44:14.455861    4228 logs.go:276] 0 containers: []
	W0923 12:44:14.455861    4228 logs.go:278] No container was found matching "kindnet"
	I0923 12:44:14.472860    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0923 12:44:14.521851    4228 logs.go:276] 1 containers: [5eb85a2791fd]
	I0923 12:44:14.530863    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 12:44:14.604603    4228 logs.go:276] 2 containers: [4b8130a0a631 9420ceb6a8a9]
	I0923 12:44:14.604603    4228 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:44:14.604603    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:44:14.872367    4228 logs.go:123] Gathering logs for coredns [db8367477ac1] ...
	I0923 12:44:14.872367    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8367477ac1"
	I0923 12:44:14.937361    4228 logs.go:123] Gathering logs for kube-proxy [606305ef153c] ...
	I0923 12:44:14.937361    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 606305ef153c"
	I0923 12:44:15.001367    4228 logs.go:123] Gathering logs for dmesg ...
	I0923 12:44:15.001367    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:44:15.033364    4228 logs.go:123] Gathering logs for etcd [a7d64ac5d685] ...
	I0923 12:44:15.033364    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d64ac5d685"
	I0923 12:44:15.119365    4228 logs.go:123] Gathering logs for coredns [ac17c0c4ecff] ...
	I0923 12:44:15.119365    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac17c0c4ecff"
	I0923 12:44:15.179365    4228 logs.go:123] Gathering logs for kube-controller-manager [1382815ffdc3] ...
	I0923 12:44:15.179365    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1382815ffdc3"
	I0923 12:44:15.283384    4228 logs.go:123] Gathering logs for kubernetes-dashboard [5eb85a2791fd] ...
	I0923 12:44:15.284404    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5eb85a2791fd"
	I0923 12:44:15.346360    4228 logs.go:123] Gathering logs for Docker ...
	I0923 12:44:15.346360    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 12:44:15.418380    4228 logs.go:123] Gathering logs for container status ...
	I0923 12:44:15.418380    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:44:15.525356    4228 logs.go:123] Gathering logs for kubelet ...
	I0923 12:44:15.525356    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:44:15.635365    4228 logs.go:138] Found kubelet problem: Sep 23 12:38:55 old-k8s-version-694600 kubelet[1893]: E0923 12:38:55.485189    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:44:15.636362    4228 logs.go:138] Found kubelet problem: Sep 23 12:38:56 old-k8s-version-694600 kubelet[1893]: E0923 12:38:56.815962    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.636362    4228 logs.go:138] Found kubelet problem: Sep 23 12:38:57 old-k8s-version-694600 kubelet[1893]: E0923 12:38:57.902283    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.639385    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:11 old-k8s-version-694600 kubelet[1893]: E0923 12:39:11.731791    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:44:15.642388    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:17 old-k8s-version-694600 kubelet[1893]: E0923 12:39:17.878257    1893 pod_workers.go:191] Error syncing pod 937b014a-169f-4dc5-ac66-a3eee1bc5138 ("storage-provisioner_kube-system(937b014a-169f-4dc5-ac66-a3eee1bc5138)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(937b014a-169f-4dc5-ac66-a3eee1bc5138)"
	W0923 12:44:15.643358    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:23 old-k8s-version-694600 kubelet[1893]: E0923 12:39:23.681185    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.649381    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:39 old-k8s-version-694600 kubelet[1893]: E0923 12:39:39.525345    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:44:15.653412    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:39 old-k8s-version-694600 kubelet[1893]: E0923 12:39:39.605074    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:44:15.654385    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:39 old-k8s-version-694600 kubelet[1893]: E0923 12:39:39.711237    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.659395    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:53 old-k8s-version-694600 kubelet[1893]: E0923 12:39:53.262469    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:44:15.659395    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:54 old-k8s-version-694600 kubelet[1893]: E0923 12:39:54.679747    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.659395    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:07 old-k8s-version-694600 kubelet[1893]: E0923 12:40:07.675786    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.659395    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:09 old-k8s-version-694600 kubelet[1893]: E0923 12:40:09.676754    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.663380    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:20 old-k8s-version-694600 kubelet[1893]: E0923 12:40:20.733409    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:44:15.666369    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:22 old-k8s-version-694600 kubelet[1893]: E0923 12:40:22.131490    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:44:15.666369    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:34 old-k8s-version-694600 kubelet[1893]: E0923 12:40:34.672437    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.667374    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:37 old-k8s-version-694600 kubelet[1893]: E0923 12:40:37.669640    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.667374    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:46 old-k8s-version-694600 kubelet[1893]: E0923 12:40:46.668789    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.667374    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:50 old-k8s-version-694600 kubelet[1893]: E0923 12:40:50.669482    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.668377    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:00 old-k8s-version-694600 kubelet[1893]: E0923 12:41:00.669910    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.670366    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:03 old-k8s-version-694600 kubelet[1893]: E0923 12:41:03.166885    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:44:15.670366    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:15 old-k8s-version-694600 kubelet[1893]: E0923 12:41:15.677829    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.671380    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:15 old-k8s-version-694600 kubelet[1893]: E0923 12:41:15.677922    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.671380    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:28 old-k8s-version-694600 kubelet[1893]: E0923 12:41:28.698549    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.671380    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:30 old-k8s-version-694600 kubelet[1893]: E0923 12:41:30.669230    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.672456    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:39 old-k8s-version-694600 kubelet[1893]: E0923 12:41:39.664583    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.675419    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:43 old-k8s-version-694600 kubelet[1893]: E0923 12:41:43.768086    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:44:15.676403    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:52 old-k8s-version-694600 kubelet[1893]: E0923 12:41:52.664375    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.676403    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:57 old-k8s-version-694600 kubelet[1893]: E0923 12:41:57.678559    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.677513    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:04 old-k8s-version-694600 kubelet[1893]: E0923 12:42:04.665461    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.677513    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:12 old-k8s-version-694600 kubelet[1893]: E0923 12:42:12.660131    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.678376    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:16 old-k8s-version-694600 kubelet[1893]: E0923 12:42:16.660802    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.678376    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:24 old-k8s-version-694600 kubelet[1893]: E0923 12:42:24.660639    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.682405    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:31 old-k8s-version-694600 kubelet[1893]: E0923 12:42:31.215753    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:44:15.683363    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:39 old-k8s-version-694600 kubelet[1893]: E0923 12:42:39.657460    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.683363    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:45 old-k8s-version-694600 kubelet[1893]: E0923 12:42:45.658547    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.683363    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:51 old-k8s-version-694600 kubelet[1893]: E0923 12:42:51.656478    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.684366    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:58 old-k8s-version-694600 kubelet[1893]: E0923 12:42:58.657078    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.684366    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:06 old-k8s-version-694600 kubelet[1893]: E0923 12:43:06.656952    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.684366    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:09 old-k8s-version-694600 kubelet[1893]: E0923 12:43:09.654057    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.685434    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:17 old-k8s-version-694600 kubelet[1893]: E0923 12:43:17.656711    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.685434    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:22 old-k8s-version-694600 kubelet[1893]: E0923 12:43:22.657075    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.685434    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:31 old-k8s-version-694600 kubelet[1893]: E0923 12:43:31.654949    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.685434    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:34 old-k8s-version-694600 kubelet[1893]: E0923 12:43:34.668314    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.686374    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:46 old-k8s-version-694600 kubelet[1893]: E0923 12:43:46.652277    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.686374    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:48 old-k8s-version-694600 kubelet[1893]: E0923 12:43:48.653714    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.686374    4228 logs.go:138] Found kubelet problem: Sep 23 12:44:00 old-k8s-version-694600 kubelet[1893]: E0923 12:44:00.654685    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.687380    4228 logs.go:138] Found kubelet problem: Sep 23 12:44:02 old-k8s-version-694600 kubelet[1893]: E0923 12:44:02.652220    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:15.687380    4228 logs.go:138] Found kubelet problem: Sep 23 12:44:13 old-k8s-version-694600 kubelet[1893]: E0923 12:44:13.649211    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0923 12:44:15.687380    4228 logs.go:123] Gathering logs for etcd [db948e782f56] ...
	I0923 12:44:15.687380    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db948e782f56"
	I0923 12:44:15.758402    4228 logs.go:123] Gathering logs for kube-scheduler [6f9ee2379541] ...
	I0923 12:44:15.758402    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f9ee2379541"
	I0923 12:44:15.823391    4228 logs.go:123] Gathering logs for kube-proxy [ae76dbbad5df] ...
	I0923 12:44:15.823391    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae76dbbad5df"
	I0923 12:44:15.888377    4228 logs.go:123] Gathering logs for kube-controller-manager [cbaa7f55c1cf] ...
	I0923 12:44:15.888377    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbaa7f55c1cf"
	I0923 12:44:15.969372    4228 logs.go:123] Gathering logs for storage-provisioner [4b8130a0a631] ...
	I0923 12:44:15.969372    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8130a0a631"
	I0923 12:44:16.031365    4228 logs.go:123] Gathering logs for storage-provisioner [9420ceb6a8a9] ...
	I0923 12:44:16.031365    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9420ceb6a8a9"
	I0923 12:44:16.099417    4228 logs.go:123] Gathering logs for kube-apiserver [bf2bddf93da4] ...
	I0923 12:44:16.099417    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf2bddf93da4"
	I0923 12:44:16.190773    4228 logs.go:123] Gathering logs for kube-apiserver [99e36abc5feb] ...
	I0923 12:44:16.190773    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99e36abc5feb"
	I0923 12:44:16.293784    4228 logs.go:123] Gathering logs for kube-scheduler [585ea5a5976b] ...
	I0923 12:44:16.293784    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ea5a5976b"
	I0923 12:44:16.351792    4228 out.go:358] Setting ErrFile to fd 1892...
	I0923 12:44:16.351792    4228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:44:16.351792    4228 out.go:270] X Problems detected in kubelet:
	W0923 12:44:16.351792    4228 out.go:270]   Sep 23 12:43:46 old-k8s-version-694600 kubelet[1893]: E0923 12:43:46.652277    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:16.351792    4228 out.go:270]   Sep 23 12:43:48 old-k8s-version-694600 kubelet[1893]: E0923 12:43:48.653714    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:16.351792    4228 out.go:270]   Sep 23 12:44:00 old-k8s-version-694600 kubelet[1893]: E0923 12:44:00.654685    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:16.351792    4228 out.go:270]   Sep 23 12:44:02 old-k8s-version-694600 kubelet[1893]: E0923 12:44:02.652220    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:16.351792    4228 out.go:270]   Sep 23 12:44:13 old-k8s-version-694600 kubelet[1893]: E0923 12:44:13.649211    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0923 12:44:16.351792    4228 out.go:358] Setting ErrFile to fd 1892...
	I0923 12:44:16.351792    4228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:44:26.366782    4228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:44:26.396825    4228 api_server.go:72] duration metric: took 5m49.7986149s to wait for apiserver process to appear ...
	I0923 12:44:26.396825    4228 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:44:26.405814    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 12:44:26.461700    4228 logs.go:276] 2 containers: [bf2bddf93da4 99e36abc5feb]
	I0923 12:44:26.473719    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 12:44:26.527587    4228 logs.go:276] 2 containers: [db948e782f56 a7d64ac5d685]
	I0923 12:44:26.535572    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 12:44:26.583105    4228 logs.go:276] 2 containers: [ac17c0c4ecff db8367477ac1]
	I0923 12:44:26.596122    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 12:44:26.646802    4228 logs.go:276] 2 containers: [6f9ee2379541 585ea5a5976b]
	I0923 12:44:26.658798    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 12:44:26.710800    4228 logs.go:276] 2 containers: [ae76dbbad5df 606305ef153c]
	I0923 12:44:26.723792    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 12:44:26.774423    4228 logs.go:276] 2 containers: [cbaa7f55c1cf 1382815ffdc3]
	I0923 12:44:26.785981    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 12:44:26.833985    4228 logs.go:276] 0 containers: []
	W0923 12:44:26.833985    4228 logs.go:278] No container was found matching "kindnet"
	I0923 12:44:26.851524    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 12:44:26.903398    4228 logs.go:276] 2 containers: [4b8130a0a631 9420ceb6a8a9]
	I0923 12:44:26.912397    4228 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0923 12:44:26.963406    4228 logs.go:276] 1 containers: [5eb85a2791fd]
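The container-ID lookups above all follow one template per control-plane component. A minimal sketch of that loop (`lookup_cmd` is a hypothetical helper; it prints each command line instead of invoking Docker, so no daemon is required):

```shell
#!/bin/sh
# Illustrative sketch of the per-component 'docker ps' lookups above.
# lookup_cmd prints the command for one component rather than running it.
lookup_cmd() {
  echo "docker ps -a --filter=name=k8s_$1 --format={{.ID}}"
}
for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet storage-provisioner \
            kubernetes-dashboard; do
  lookup_cmd "$comp"
done
```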
	I0923 12:44:26.963406    4228 logs.go:123] Gathering logs for kube-apiserver [99e36abc5feb] ...
	I0923 12:44:26.963406    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99e36abc5feb"
	I0923 12:44:27.072432    4228 logs.go:123] Gathering logs for container status ...
	I0923 12:44:27.072432    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:44:27.189774    4228 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:44:27.189774    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:44:27.823897    4228 logs.go:123] Gathering logs for etcd [a7d64ac5d685] ...
	I0923 12:44:27.823897    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d64ac5d685"
	I0923 12:44:27.890035    4228 logs.go:123] Gathering logs for kube-scheduler [585ea5a5976b] ...
	I0923 12:44:27.890035    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ea5a5976b"
	I0923 12:44:27.947602    4228 logs.go:123] Gathering logs for kube-proxy [606305ef153c] ...
	I0923 12:44:27.947602    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 606305ef153c"
	I0923 12:44:28.012651    4228 logs.go:123] Gathering logs for storage-provisioner [4b8130a0a631] ...
	I0923 12:44:28.012651    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b8130a0a631"
	I0923 12:44:28.105743    4228 logs.go:123] Gathering logs for storage-provisioner [9420ceb6a8a9] ...
	I0923 12:44:28.105743    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9420ceb6a8a9"
	I0923 12:44:28.170383    4228 logs.go:123] Gathering logs for kubernetes-dashboard [5eb85a2791fd] ...
	I0923 12:44:28.170383    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5eb85a2791fd"
	I0923 12:44:28.235386    4228 logs.go:123] Gathering logs for Docker ...
	I0923 12:44:28.235386    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 12:44:28.780490    4228 logs.go:123] Gathering logs for etcd [db948e782f56] ...
	I0923 12:44:28.780490    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db948e782f56"
	I0923 12:44:28.863631    4228 logs.go:123] Gathering logs for kube-apiserver [bf2bddf93da4] ...
	I0923 12:44:28.863631    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf2bddf93da4"
	I0923 12:44:28.995447    4228 logs.go:123] Gathering logs for kube-proxy [ae76dbbad5df] ...
	I0923 12:44:28.995486    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae76dbbad5df"
	I0923 12:44:29.067116    4228 logs.go:123] Gathering logs for kube-controller-manager [cbaa7f55c1cf] ...
	I0923 12:44:29.067116    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbaa7f55c1cf"
	I0923 12:44:29.160099    4228 logs.go:123] Gathering logs for kube-controller-manager [1382815ffdc3] ...
	I0923 12:44:29.160099    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1382815ffdc3"
	I0923 12:44:29.255092    4228 logs.go:123] Gathering logs for kubelet ...
	I0923 12:44:29.255092    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:44:29.389521    4228 logs.go:138] Found kubelet problem: Sep 23 12:38:55 old-k8s-version-694600 kubelet[1893]: E0923 12:38:55.485189    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:44:29.390517    4228 logs.go:138] Found kubelet problem: Sep 23 12:38:56 old-k8s-version-694600 kubelet[1893]: E0923 12:38:56.815962    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.390517    4228 logs.go:138] Found kubelet problem: Sep 23 12:38:57 old-k8s-version-694600 kubelet[1893]: E0923 12:38:57.902283    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.393519    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:11 old-k8s-version-694600 kubelet[1893]: E0923 12:39:11.731791    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:44:29.401538    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:17 old-k8s-version-694600 kubelet[1893]: E0923 12:39:17.878257    1893 pod_workers.go:191] Error syncing pod 937b014a-169f-4dc5-ac66-a3eee1bc5138 ("storage-provisioner_kube-system(937b014a-169f-4dc5-ac66-a3eee1bc5138)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(937b014a-169f-4dc5-ac66-a3eee1bc5138)"
	W0923 12:44:29.401538    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:23 old-k8s-version-694600 kubelet[1893]: E0923 12:39:23.681185    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.406540    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:39 old-k8s-version-694600 kubelet[1893]: E0923 12:39:39.525345    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:44:29.410518    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:39 old-k8s-version-694600 kubelet[1893]: E0923 12:39:39.605074    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:44:29.410518    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:39 old-k8s-version-694600 kubelet[1893]: E0923 12:39:39.711237    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.414526    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:53 old-k8s-version-694600 kubelet[1893]: E0923 12:39:53.262469    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:44:29.415684    4228 logs.go:138] Found kubelet problem: Sep 23 12:39:54 old-k8s-version-694600 kubelet[1893]: E0923 12:39:54.679747    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.416143    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:07 old-k8s-version-694600 kubelet[1893]: E0923 12:40:07.675786    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.416571    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:09 old-k8s-version-694600 kubelet[1893]: E0923 12:40:09.676754    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.420525    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:20 old-k8s-version-694600 kubelet[1893]: E0923 12:40:20.733409    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:44:29.423836    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:22 old-k8s-version-694600 kubelet[1893]: E0923 12:40:22.131490    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:44:29.423836    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:34 old-k8s-version-694600 kubelet[1893]: E0923 12:40:34.672437    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.424518    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:37 old-k8s-version-694600 kubelet[1893]: E0923 12:40:37.669640    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.424846    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:46 old-k8s-version-694600 kubelet[1893]: E0923 12:40:46.668789    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.425245    4228 logs.go:138] Found kubelet problem: Sep 23 12:40:50 old-k8s-version-694600 kubelet[1893]: E0923 12:40:50.669482    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.425580    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:00 old-k8s-version-694600 kubelet[1893]: E0923 12:41:00.669910    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.428472    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:03 old-k8s-version-694600 kubelet[1893]: E0923 12:41:03.166885    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:44:29.428929    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:15 old-k8s-version-694600 kubelet[1893]: E0923 12:41:15.677829    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.429375    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:15 old-k8s-version-694600 kubelet[1893]: E0923 12:41:15.677922    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.429639    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:28 old-k8s-version-694600 kubelet[1893]: E0923 12:41:28.698549    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.429944    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:30 old-k8s-version-694600 kubelet[1893]: E0923 12:41:30.669230    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.430272    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:39 old-k8s-version-694600 kubelet[1893]: E0923 12:41:39.664583    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.432386    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:43 old-k8s-version-694600 kubelet[1893]: E0923 12:41:43.768086    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:44:29.433002    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:52 old-k8s-version-694600 kubelet[1893]: E0923 12:41:52.664375    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.433161    4228 logs.go:138] Found kubelet problem: Sep 23 12:41:57 old-k8s-version-694600 kubelet[1893]: E0923 12:41:57.678559    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.433161    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:04 old-k8s-version-694600 kubelet[1893]: E0923 12:42:04.665461    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.433161    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:12 old-k8s-version-694600 kubelet[1893]: E0923 12:42:12.660131    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.433991    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:16 old-k8s-version-694600 kubelet[1893]: E0923 12:42:16.660802    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.433991    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:24 old-k8s-version-694600 kubelet[1893]: E0923 12:42:24.660639    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.435984    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:31 old-k8s-version-694600 kubelet[1893]: E0923 12:42:31.215753    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:44:29.435984    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:39 old-k8s-version-694600 kubelet[1893]: E0923 12:42:39.657460    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.436983    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:45 old-k8s-version-694600 kubelet[1893]: E0923 12:42:45.658547    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.436983    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:51 old-k8s-version-694600 kubelet[1893]: E0923 12:42:51.656478    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.436983    4228 logs.go:138] Found kubelet problem: Sep 23 12:42:58 old-k8s-version-694600 kubelet[1893]: E0923 12:42:58.657078    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.436983    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:06 old-k8s-version-694600 kubelet[1893]: E0923 12:43:06.656952    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.437975    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:09 old-k8s-version-694600 kubelet[1893]: E0923 12:43:09.654057    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.437975    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:17 old-k8s-version-694600 kubelet[1893]: E0923 12:43:17.656711    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.437975    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:22 old-k8s-version-694600 kubelet[1893]: E0923 12:43:22.657075    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.437975    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:31 old-k8s-version-694600 kubelet[1893]: E0923 12:43:31.654949    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.438981    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:34 old-k8s-version-694600 kubelet[1893]: E0923 12:43:34.668314    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.438981    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:46 old-k8s-version-694600 kubelet[1893]: E0923 12:43:46.652277    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.438981    4228 logs.go:138] Found kubelet problem: Sep 23 12:43:48 old-k8s-version-694600 kubelet[1893]: E0923 12:43:48.653714    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.438981    4228 logs.go:138] Found kubelet problem: Sep 23 12:44:00 old-k8s-version-694600 kubelet[1893]: E0923 12:44:00.654685    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.439974    4228 logs.go:138] Found kubelet problem: Sep 23 12:44:02 old-k8s-version-694600 kubelet[1893]: E0923 12:44:02.652220    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.439974    4228 logs.go:138] Found kubelet problem: Sep 23 12:44:13 old-k8s-version-694600 kubelet[1893]: E0923 12:44:13.649211    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.439974    4228 logs.go:138] Found kubelet problem: Sep 23 12:44:15 old-k8s-version-694600 kubelet[1893]: E0923 12:44:15.648344    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.442976    4228 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-694600 kubelet[1893]: E0923 12:44:28.021315    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	I0923 12:44:29.442976    4228 logs.go:123] Gathering logs for coredns [ac17c0c4ecff] ...
	I0923 12:44:29.442976    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac17c0c4ecff"
	I0923 12:44:29.499985    4228 logs.go:123] Gathering logs for coredns [db8367477ac1] ...
	I0923 12:44:29.499985    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db8367477ac1"
	I0923 12:44:29.566261    4228 logs.go:123] Gathering logs for kube-scheduler [6f9ee2379541] ...
	I0923 12:44:29.566261    4228 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f9ee2379541"
	I0923 12:44:29.684329    4228 logs.go:123] Gathering logs for dmesg ...
	I0923 12:44:29.685336    4228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:44:29.716302    4228 out.go:358] Setting ErrFile to fd 1892...
	I0923 12:44:29.716302    4228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:44:29.716302    4228 out.go:270] X Problems detected in kubelet:
	W0923 12:44:29.716302    4228 out.go:270]   Sep 23 12:44:00 old-k8s-version-694600 kubelet[1893]: E0923 12:44:00.654685    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.716302    4228 out.go:270]   Sep 23 12:44:02 old-k8s-version-694600 kubelet[1893]: E0923 12:44:02.652220    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.716302    4228 out.go:270]   Sep 23 12:44:13 old-k8s-version-694600 kubelet[1893]: E0923 12:44:13.649211    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.716302    4228 out.go:270]   Sep 23 12:44:15 old-k8s-version-694600 kubelet[1893]: E0923 12:44:15.648344    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:44:29.716302    4228 out.go:270]   Sep 23 12:44:28 old-k8s-version-694600 kubelet[1893]: E0923 12:44:28.021315    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	I0923 12:44:29.716302    4228 out.go:358] Setting ErrFile to fd 1892...
	I0923 12:44:29.716302    4228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:44:39.717477    4228 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60042/healthz ...
	I0923 12:44:39.735476    4228 api_server.go:279] https://127.0.0.1:60042/healthz returned 200:
	ok
	I0923 12:44:39.739491    4228 out.go:201] 
	W0923 12:44:39.743472    4228 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0923 12:44:39.743472    4228 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0923 12:44:39.743472    4228 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0923 12:44:39.743472    4228 out.go:270] * 
	W0923 12:44:39.745491    4228 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 12:44:39.750477    4228 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-694600 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-694600
helpers_test.go:235: (dbg) docker inspect old-k8s-version-694600:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "441bc2f7b5adfb2cfb4dc936c5a158e90a684418ff594a104d03ebca9fdc4c1a",
	        "Created": "2024-09-23T12:33:49.012189383Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303759,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T12:37:52.038740944Z",
	            "FinishedAt": "2024-09-23T12:37:48.135536288Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/441bc2f7b5adfb2cfb4dc936c5a158e90a684418ff594a104d03ebca9fdc4c1a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/441bc2f7b5adfb2cfb4dc936c5a158e90a684418ff594a104d03ebca9fdc4c1a/hostname",
	        "HostsPath": "/var/lib/docker/containers/441bc2f7b5adfb2cfb4dc936c5a158e90a684418ff594a104d03ebca9fdc4c1a/hosts",
	        "LogPath": "/var/lib/docker/containers/441bc2f7b5adfb2cfb4dc936c5a158e90a684418ff594a104d03ebca9fdc4c1a/441bc2f7b5adfb2cfb4dc936c5a158e90a684418ff594a104d03ebca9fdc4c1a-json.log",
	        "Name": "/old-k8s-version-694600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-694600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-694600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3d1bb798fb4d7d4ab89e7033a82caba81dca0ab3456a03538588d0784a8ea824-init/diff:/var/lib/docker/overlay2/c7287d3444125b9a8090b921db98cb6ed8be2d7a048d39cf2a791cb2793d7251/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d1bb798fb4d7d4ab89e7033a82caba81dca0ab3456a03538588d0784a8ea824/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d1bb798fb4d7d4ab89e7033a82caba81dca0ab3456a03538588d0784a8ea824/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d1bb798fb4d7d4ab89e7033a82caba81dca0ab3456a03538588d0784a8ea824/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-694600",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-694600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-694600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-694600",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-694600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85cd1bcfb9cdc758d88b41d131fef70a8ca59d5e8b8d5b89a82577316b86a82e",
	            "SandboxKey": "/var/run/docker/netns/85cd1bcfb9cd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60038"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60039"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60040"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60041"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60042"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-694600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "15fe9a4473166025986c923d51f44c965b073feca0dd2650fd129e1c57b5f812",
	                    "EndpointID": "c55be3cc11eaf7b3e39285fe3383483e186e0dafef4892fbe0ca13c52017c614",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-694600",
	                        "441bc2f7b5ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-694600 -n old-k8s-version-694600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-694600 -n old-k8s-version-694600: (1.2240958s)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-694600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-694600 logs -n 25: (2.9040335s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p auto-579000 sudo systemctl                        | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | cat docker --no-pager                                |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo cat                              | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | /etc/docker/daemon.json                              |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo docker                           | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | system info                                          |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo systemctl                        | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | status cri-docker --all --full                       |                       |                   |         |                     |                     |
	|         | --no-pager                                           |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo systemctl                        | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | cat cri-docker --no-pager                            |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo cat                              | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo cat                              | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo                                  | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | cri-dockerd --version                                |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo systemctl                        | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | status containerd --all --full                       |                       |                   |         |                     |                     |
	|         | --no-pager                                           |                       |                   |         |                     |                     |
	| image   | embed-certs-648500 image list                        | embed-certs-648500    | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | --format=json                                        |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo systemctl                        | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | cat containerd --no-pager                            |                       |                   |         |                     |                     |
	| pause   | -p embed-certs-648500                                | embed-certs-648500    | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | --alsologtostderr -v=1                               |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo cat                              | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo cat                              | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | /etc/containerd/config.toml                          |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo containerd                       | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | config dump                                          |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo systemctl                        | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC |                     |
	|         | status crio --all --full                             |                       |                   |         |                     |                     |
	|         | --no-pager                                           |                       |                   |         |                     |                     |
	| unpause | -p embed-certs-648500                                | embed-certs-648500    | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | --alsologtostderr -v=1                               |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo systemctl                        | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | cat crio --no-pager                                  |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo find                             | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |                   |         |                     |                     |
	| ssh     | -p auto-579000 sudo crio                             | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | config                                               |                       |                   |         |                     |                     |
	| delete  | -p auto-579000                                       | auto-579000           | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	| delete  | -p embed-certs-648500                                | embed-certs-648500    | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	| start   | -p custom-flannel-579000                             | custom-flannel-579000 | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |                   |         |                     |                     |
	|         | --cni=testdata\kube-flannel.yaml                     |                       |                   |         |                     |                     |
	|         | --driver=docker                                      |                       |                   |         |                     |                     |
	| delete  | -p embed-certs-648500                                | embed-certs-648500    | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	| start   | -p false-579000 --memory=3072                        | false-579000          | minikube2\jenkins | v1.34.0 | 23 Sep 24 12:44 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |                   |         |                     |                     |
	|         | --wait-timeout=15m --cni=false                       |                       |                   |         |                     |                     |
	|         | --driver=docker                                      |                       |                   |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:44:31
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:44:31.046186    6636 out.go:345] Setting OutFile to fd 2028 ...
	I0923 12:44:31.124632    6636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:44:31.124632    6636 out.go:358] Setting ErrFile to fd 1708...
	I0923 12:44:31.124632    6636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:44:31.149500    6636 out.go:352] Setting JSON to false
	I0923 12:44:31.152592    6636 start.go:129] hostinfo: {"hostname":"minikube2","uptime":6338,"bootTime":1727089132,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0923 12:44:31.152592    6636 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 12:44:27.167258    5956 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-kf86t" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:29.626316    5956 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-kf86t" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:31.243482    6636 out.go:177] * [false-579000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 12:44:31.247862    6636 notify.go:220] Checking for updates...
	I0923 12:44:31.252446    6636 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 12:44:31.260488    6636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:44:31.267122    6636 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0923 12:44:31.274857    6636 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:44:31.282498    6636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:44:31.290643    6636 config.go:182] Loaded profile config "calico-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:44:31.291215    6636 config.go:182] Loaded profile config "custom-flannel-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:44:31.291930    6636 config.go:182] Loaded profile config "old-k8s-version-694600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 12:44:31.292258    6636 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:44:31.480645    6636 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 12:44:31.489652    6636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:44:31.879034    6636 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:89 SystemTime:2024-09-23 12:44:31.840560079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 12:44:31.883032    6636 out.go:177] * Using the docker driver based on user configuration
	I0923 12:44:31.891030    6636 start.go:297] selected driver: docker
	I0923 12:44:31.891030    6636 start.go:901] validating driver "docker" against <nil>
	I0923 12:44:31.891030    6636 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:44:31.966631    6636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:44:32.344204    6636 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:true NGoroutines:89 SystemTime:2024-09-23 12:44:32.309756877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 12:44:32.345207    6636 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:44:32.346377    6636 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:44:32.346377    6636 out.go:177] * Using Docker Desktop driver with root privileges
	I0923 12:44:32.353249    6636 cni.go:84] Creating CNI manager for "false"
	I0923 12:44:32.353249    6636 start.go:340] cluster config:
	{Name:false-579000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:44:32.358224    6636 out.go:177] * Starting "false-579000" primary control-plane node in "false-579000" cluster
	I0923 12:44:32.365205    6636 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 12:44:32.374244    6636 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 12:44:32.380204    6636 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:44:32.380204    6636 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 12:44:32.380204    6636 preload.go:146] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 12:44:32.380204    6636 cache.go:56] Caching tarball of preloaded images
	I0923 12:44:32.381210    6636 preload.go:172] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:44:32.381210    6636 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:44:32.381210    6636 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-579000\config.json ...
	I0923 12:44:32.381210    6636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-579000\config.json: {Name:mk009d7ac1de780cd68ca9916b7a5c638ffaa261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:44:32.498347    6636 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 12:44:32.498347    6636 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 12:44:32.498347    6636 cache.go:194] Successfully downloaded all kic artifacts
	I0923 12:44:32.498347    6636 start.go:360] acquireMachinesLock for false-579000: {Name:mk545e73c94d9a16da04c221b47ccd312103589d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:44:32.498347    6636 start.go:364] duration metric: took 0s to acquireMachinesLock for "false-579000"
	I0923 12:44:32.498347    6636 start.go:93] Provisioning new machine with config: &{Name:false-579000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-579000 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:44:32.498347    6636 start.go:125] createHost starting for "" (driver="docker")
	I0923 12:44:30.035833    2288 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0923 12:44:30.035833    2288 start.go:159] libmachine.API.Create for "custom-flannel-579000" (driver="docker")
	I0923 12:44:30.035833    2288 client.go:168] LocalClient.Create starting
	I0923 12:44:30.037421    2288 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0923 12:44:30.037421    2288 main.go:141] libmachine: Decoding PEM data...
	I0923 12:44:30.037421    2288 main.go:141] libmachine: Parsing certificate...
	I0923 12:44:30.037998    2288 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0923 12:44:30.038206    2288 main.go:141] libmachine: Decoding PEM data...
	I0923 12:44:30.038206    2288 main.go:141] libmachine: Parsing certificate...
	I0923 12:44:30.050926    2288 cli_runner.go:164] Run: docker network inspect custom-flannel-579000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 12:44:30.132908    2288 cli_runner.go:211] docker network inspect custom-flannel-579000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 12:44:30.140904    2288 network_create.go:284] running [docker network inspect custom-flannel-579000] to gather additional debugging logs...
	I0923 12:44:30.140904    2288 cli_runner.go:164] Run: docker network inspect custom-flannel-579000
	W0923 12:44:30.217916    2288 cli_runner.go:211] docker network inspect custom-flannel-579000 returned with exit code 1
	I0923 12:44:30.217916    2288 network_create.go:287] error running [docker network inspect custom-flannel-579000]: docker network inspect custom-flannel-579000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-579000 not found
	I0923 12:44:30.217916    2288 network_create.go:289] output of [docker network inspect custom-flannel-579000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-579000 not found
	
	** /stderr **
	I0923 12:44:30.225912    2288 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 12:44:30.318911    2288 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0923 12:44:30.349359    2288 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0923 12:44:30.371957    2288 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001636fc0}
	I0923 12:44:30.372077    2288 network_create.go:124] attempt to create docker network custom-flannel-579000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0923 12:44:30.379715    2288 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-579000 custom-flannel-579000
	W0923 12:44:30.471641    2288 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-579000 custom-flannel-579000 returned with exit code 1
	W0923 12:44:30.471641    2288 network_create.go:149] failed to create docker network custom-flannel-579000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-579000 custom-flannel-579000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0923 12:44:30.471641    2288 network_create.go:116] failed to create docker network custom-flannel-579000 192.168.67.0/24, will retry: subnet is taken
	I0923 12:44:30.505787    2288 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0923 12:44:30.525789    2288 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016376b0}
	I0923 12:44:30.525789    2288 network_create.go:124] attempt to create docker network custom-flannel-579000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0923 12:44:30.532826    2288 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-579000 custom-flannel-579000
	I0923 12:44:31.487663    2288 network_create.go:108] docker network custom-flannel-579000 192.168.76.0/24 created
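	The retry visible above (49 → 58 → 67 reserved or overlapping, 76 eventually succeeding) reflects minikube probing candidate private /24 subnets, advancing the third octet by 9 until `docker network create` succeeds. A minimal sketch of that probing order as it appears in this log (an illustration of the logged behaviour, not minikube's actual implementation):

```go
package main

import "fmt"

// firstFreeSubnet walks the candidate /24 subnets in the order seen in the
// log: starting at 192.168.49.0 and stepping the third octet by 9
// (49, 58, 67, 76, 85, ...), skipping any subnet already reserved or
// reported as overlapping by the Docker daemon.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // no free candidate
}

func main() {
	// Subnets unavailable at the time false-579000 was created, per the log:
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true, // create failed: "Pool overlaps with other one"
		"192.168.76.0/24": true, // taken by custom-flannel-579000
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
}
```

	This matches the log below, where the parallel `false-579000` profile skips all four earlier subnets and lands on 192.168.85.0/24.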
	I0923 12:44:31.487663    2288 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-579000" container
	I0923 12:44:31.504652    2288 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 12:44:31.603662    2288 cli_runner.go:164] Run: docker volume create custom-flannel-579000 --label name.minikube.sigs.k8s.io=custom-flannel-579000 --label created_by.minikube.sigs.k8s.io=true
	I0923 12:44:31.706796    2288 oci.go:103] Successfully created a docker volume custom-flannel-579000
	I0923 12:44:31.719800    2288 cli_runner.go:164] Run: docker run --rm --name custom-flannel-579000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-579000 --entrypoint /usr/bin/test -v custom-flannel-579000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 12:44:33.490099    2288 cli_runner.go:217] Completed: docker run --rm --name custom-flannel-579000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-579000 --entrypoint /usr/bin/test -v custom-flannel-579000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (1.7702849s)
	I0923 12:44:33.490099    2288 oci.go:107] Successfully prepared a docker volume custom-flannel-579000
	I0923 12:44:33.490099    2288 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:44:33.490099    2288 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 12:44:33.499079    2288 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-579000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 12:44:32.505346    6636 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0923 12:44:32.505346    6636 start.go:159] libmachine.API.Create for "false-579000" (driver="docker")
	I0923 12:44:32.505346    6636 client.go:168] LocalClient.Create starting
	I0923 12:44:32.506366    6636 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0923 12:44:32.506366    6636 main.go:141] libmachine: Decoding PEM data...
	I0923 12:44:32.506366    6636 main.go:141] libmachine: Parsing certificate...
	I0923 12:44:32.506366    6636 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0923 12:44:32.507371    6636 main.go:141] libmachine: Decoding PEM data...
	I0923 12:44:32.507371    6636 main.go:141] libmachine: Parsing certificate...
	I0923 12:44:32.519353    6636 cli_runner.go:164] Run: docker network inspect false-579000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 12:44:32.597449    6636 cli_runner.go:211] docker network inspect false-579000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 12:44:32.606354    6636 network_create.go:284] running [docker network inspect false-579000] to gather additional debugging logs...
	I0923 12:44:32.606354    6636 cli_runner.go:164] Run: docker network inspect false-579000
	W0923 12:44:32.689358    6636 cli_runner.go:211] docker network inspect false-579000 returned with exit code 1
	I0923 12:44:32.689358    6636 network_create.go:287] error running [docker network inspect false-579000]: docker network inspect false-579000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network false-579000 not found
	I0923 12:44:32.689358    6636 network_create.go:289] output of [docker network inspect false-579000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network false-579000 not found
	
	** /stderr **
	I0923 12:44:32.698352    6636 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 12:44:32.808872    6636 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0923 12:44:32.841107    6636 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0923 12:44:32.872425    6636 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0923 12:44:32.903033    6636 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0923 12:44:32.929030    6636 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016b60f0}
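The network.go lines above show minikube scanning candidate private /24 subnets in a fixed order (192.168.49.0/24, .58, .67, .76, ...) and taking the first one not already reserved by an existing docker network. A minimal first-fit sketch of that scan (simplified; the real minikube code also probes host interfaces and records a reservation) might look like:

```go
package main

import "fmt"

// firstFreeSubnet walks candidate 192.168.x.0/24 subnets in the order
// visible in the log (third octet 49, 58, 67, ... stepping by 9) and
// returns the first CIDR not already reserved. This is a simplified
// sketch of the selection logic, not minikube's actual implementation.
func firstFreeSubnet(reserved map[string]bool) string {
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !reserved[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// Subnets the log shows as already taken by existing docker networks.
	reserved := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(reserved)) // prints 192.168.85.0/24
}
```

With the four reserved subnets from this run, the scan lands on 192.168.85.0/24, matching the "using free private subnet" line above.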
	I0923 12:44:32.929030    6636 network_create.go:124] attempt to create docker network false-579000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0923 12:44:32.937012    6636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-579000 false-579000
	I0923 12:44:33.447068    6636 network_create.go:108] docker network false-579000 192.168.85.0/24 created
	I0923 12:44:33.447068    6636 kic.go:121] calculated static IP "192.168.85.2" for the "false-579000" container
	I0923 12:44:33.469579    6636 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 12:44:33.556078    6636 cli_runner.go:164] Run: docker volume create false-579000 --label name.minikube.sigs.k8s.io=false-579000 --label created_by.minikube.sigs.k8s.io=true
	I0923 12:44:33.646013    6636 oci.go:103] Successfully created a docker volume false-579000
	I0923 12:44:33.656339    6636 cli_runner.go:164] Run: docker run --rm --name false-579000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-579000 --entrypoint /usr/bin/test -v false-579000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 12:44:31.626577    5956 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-kf86t" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:33.121648    5956 pod_ready.go:93] pod "calico-kube-controllers-b8d8894fb-kf86t" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:33.121867    5956 pod_ready.go:82] duration metric: took 1m18.0179683s for pod "calico-kube-controllers-b8d8894fb-kf86t" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.121867    5956 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-sf6t5" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.138382    5956 pod_ready.go:93] pod "calico-node-sf6t5" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:33.138382    5956 pod_ready.go:82] duration metric: took 16.5146ms for pod "calico-node-sf6t5" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.138382    5956 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-n4q8p" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.151340    5956 pod_ready.go:93] pod "coredns-7c65d6cfc9-n4q8p" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:33.151340    5956 pod_ready.go:82] duration metric: took 12.9581ms for pod "coredns-7c65d6cfc9-n4q8p" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.151340    5956 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-pb4h7" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.161984    5956 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-pb4h7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-pb4h7" not found
	I0923 12:44:33.162237    5956 pod_ready.go:82] duration metric: took 10.7728ms for pod "coredns-7c65d6cfc9-pb4h7" in "kube-system" namespace to be "Ready" ...
	E0923 12:44:33.162367    5956 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-pb4h7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-pb4h7" not found
	I0923 12:44:33.162367    5956 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-579000" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.181974    5956 pod_ready.go:93] pod "etcd-calico-579000" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:33.182057    5956 pod_ready.go:82] duration metric: took 19.5873ms for pod "etcd-calico-579000" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.182057    5956 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-579000" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.312842    5956 pod_ready.go:93] pod "kube-apiserver-calico-579000" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:33.312842    5956 pod_ready.go:82] duration metric: took 130.7836ms for pod "kube-apiserver-calico-579000" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.312842    5956 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-579000" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.713331    5956 pod_ready.go:93] pod "kube-controller-manager-calico-579000" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:33.713331    5956 pod_ready.go:82] duration metric: took 400.4858ms for pod "kube-controller-manager-calico-579000" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:33.713331    5956 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-wfz6t" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:34.114437    5956 pod_ready.go:93] pod "kube-proxy-wfz6t" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:34.114437    5956 pod_ready.go:82] duration metric: took 401.1029ms for pod "kube-proxy-wfz6t" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:34.114437    5956 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-579000" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:34.513750    5956 pod_ready.go:93] pod "kube-scheduler-calico-579000" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:34.513750    5956 pod_ready.go:82] duration metric: took 399.3101ms for pod "kube-scheduler-calico-579000" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:34.513750    5956 pod_ready.go:39] duration metric: took 1m19.4298591s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:44:34.513750    5956 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:44:34.528506    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 12:44:34.588477    5956 logs.go:276] 1 containers: [304a451854d7]
	I0923 12:44:34.599462    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 12:44:34.651476    5956 logs.go:276] 1 containers: [31a58c1516b6]
	I0923 12:44:34.665477    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 12:44:34.709462    5956 logs.go:276] 1 containers: [ece3446fb1e3]
	I0923 12:44:34.717465    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 12:44:34.763211    5956 logs.go:276] 1 containers: [a2fd13638bdf]
	I0923 12:44:34.772364    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 12:44:34.821122    5956 logs.go:276] 1 containers: [c3cf061d5f5f]
	I0923 12:44:34.829851    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 12:44:34.874625    5956 logs.go:276] 1 containers: [6ea5e8ad224d]
	I0923 12:44:34.882639    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 12:44:34.931642    5956 logs.go:276] 0 containers: []
	W0923 12:44:34.931642    5956 logs.go:278] No container was found matching "kindnet"
	I0923 12:44:34.931642    5956 logs.go:123] Gathering logs for kube-apiserver [304a451854d7] ...
	I0923 12:44:34.931642    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 304a451854d7"
	I0923 12:44:35.007824    5956 logs.go:123] Gathering logs for coredns [ece3446fb1e3] ...
	I0923 12:44:35.007824    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece3446fb1e3"
	I0923 12:44:35.075871    5956 logs.go:123] Gathering logs for kube-proxy [c3cf061d5f5f] ...
	I0923 12:44:35.075871    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3cf061d5f5f"
	I0923 12:44:35.126640    5956 logs.go:123] Gathering logs for kubelet ...
	I0923 12:44:35.126640    5956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 12:44:35.340646    5956 logs.go:123] Gathering logs for dmesg ...
	I0923 12:44:35.341310    5956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:44:35.380161    5956 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:44:35.380161    5956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:44:35.594831    5956 logs.go:123] Gathering logs for Docker ...
	I0923 12:44:35.594831    5956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 12:44:35.759529    5956 logs.go:123] Gathering logs for container status ...
	I0923 12:44:35.759529    5956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:44:35.886235    5956 logs.go:123] Gathering logs for etcd [31a58c1516b6] ...
	I0923 12:44:35.886766    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a58c1516b6"
	I0923 12:44:35.991835    5956 logs.go:123] Gathering logs for kube-scheduler [a2fd13638bdf] ...
	I0923 12:44:35.991835    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd13638bdf"
	I0923 12:44:36.101663    5956 logs.go:123] Gathering logs for kube-controller-manager [6ea5e8ad224d] ...
	I0923 12:44:36.101663    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea5e8ad224d"
	I0923 12:44:39.717477    4228 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60042/healthz ...
	I0923 12:44:39.735476    4228 api_server.go:279] https://127.0.0.1:60042/healthz returned 200:
	ok
	I0923 12:44:39.739491    4228 out.go:201] 
	W0923 12:44:39.743472    4228 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0923 12:44:39.743472    4228 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0923 12:44:39.743472    4228 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0923 12:44:39.743472    4228 out.go:270] * 
	W0923 12:44:39.745491    4228 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 12:44:39.750477    4228 out.go:201] 
	I0923 12:44:36.584602    6636 cli_runner.go:217] Completed: docker run --rm --name false-579000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-579000 --entrypoint /usr/bin/test -v false-579000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.9281637s)
	I0923 12:44:36.584816    6636 oci.go:107] Successfully prepared a docker volume false-579000
	I0923 12:44:36.584816    6636 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:44:36.584816    6636 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 12:44:36.595266    6636 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-579000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 12:44:38.729951    5956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:44:38.758665    5956 api_server.go:72] duration metric: took 1m27.274971s to wait for apiserver process to appear ...
	I0923 12:44:38.758665    5956 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:44:38.766661    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 12:44:38.824745    5956 logs.go:276] 1 containers: [304a451854d7]
	I0923 12:44:38.836701    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 12:44:38.886693    5956 logs.go:276] 1 containers: [31a58c1516b6]
	I0923 12:44:38.894687    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 12:44:38.943699    5956 logs.go:276] 1 containers: [ece3446fb1e3]
	I0923 12:44:38.952690    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 12:44:38.996675    5956 logs.go:276] 1 containers: [a2fd13638bdf]
	I0923 12:44:39.004690    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 12:44:39.052594    5956 logs.go:276] 1 containers: [c3cf061d5f5f]
	I0923 12:44:39.062509    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 12:44:39.115317    5956 logs.go:276] 1 containers: [6ea5e8ad224d]
	I0923 12:44:39.123323    5956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 12:44:39.175693    5956 logs.go:276] 0 containers: []
	W0923 12:44:39.175693    5956 logs.go:278] No container was found matching "kindnet"
	I0923 12:44:39.175801    5956 logs.go:123] Gathering logs for etcd [31a58c1516b6] ...
	I0923 12:44:39.175801    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a58c1516b6"
	I0923 12:44:39.253811    5956 logs.go:123] Gathering logs for coredns [ece3446fb1e3] ...
	I0923 12:44:39.253811    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece3446fb1e3"
	I0923 12:44:39.299804    5956 logs.go:123] Gathering logs for kube-scheduler [a2fd13638bdf] ...
	I0923 12:44:39.299804    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd13638bdf"
	I0923 12:44:39.358864    5956 logs.go:123] Gathering logs for Docker ...
	I0923 12:44:39.358864    5956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 12:44:39.452567    5956 logs.go:123] Gathering logs for container status ...
	I0923 12:44:39.452567    5956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:44:39.544732    5956 logs.go:123] Gathering logs for kubelet ...
	I0923 12:44:39.544732    5956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 12:44:39.715479    5956 logs.go:123] Gathering logs for dmesg ...
	I0923 12:44:39.715479    5956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:44:39.750477    5956 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:44:39.750477    5956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:44:39.980880    5956 logs.go:123] Gathering logs for kube-apiserver [304a451854d7] ...
	I0923 12:44:39.980880    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 304a451854d7"
	I0923 12:44:40.051011    5956 logs.go:123] Gathering logs for kube-proxy [c3cf061d5f5f] ...
	I0923 12:44:40.051011    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3cf061d5f5f"
	I0923 12:44:40.114379    5956 logs.go:123] Gathering logs for kube-controller-manager [6ea5e8ad224d] ...
	I0923 12:44:40.114379    5956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ea5e8ad224d"
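The "Gathering logs for ..." passes above iterate over the discovered container IDs and run the same `docker logs --tail 400 <id>` command for each component. A small sketch of that command construction (names and structure are an assumption for illustration, not minikube's actual helper):

```go
package main

import "fmt"

// logCommand builds the shell command the runner issues to collect the
// last 400 log lines from one container, as seen verbatim in the log.
// This helper is hypothetical; minikube assembles the command inline.
func logCommand(containerID string) string {
	return fmt.Sprintf("docker logs --tail 400 %s", containerID)
}

func main() {
	// Control-plane container IDs discovered earlier in this run.
	ids := map[string]string{
		"kube-apiserver":          "304a451854d7",
		"etcd":                    "31a58c1516b6",
		"coredns":                 "ece3446fb1e3",
		"kube-scheduler":          "a2fd13638bdf",
		"kube-proxy":              "c3cf061d5f5f",
		"kube-controller-manager": "6ea5e8ad224d",
	}
	for name, id := range ids {
		fmt.Printf("%s: %s\n", name, logCommand(id))
	}
}
```

Components with no matching container (like "kindnet" above) are skipped with a warning instead of producing a command.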
	
	
	==> Docker <==
	Sep 23 12:39:53 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:39:53.002022937Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=bc562c3ebf8ea584 traceID=12bda907be02af66096253dac142f012
	Sep 23 12:39:53 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:39:53.248475389Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=bc562c3ebf8ea584 traceID=12bda907be02af66096253dac142f012
	Sep 23 12:39:53 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:39:53.248806723Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=bc562c3ebf8ea584 traceID=12bda907be02af66096253dac142f012
	Sep 23 12:39:53 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:39:53.248880330Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=bc562c3ebf8ea584 traceID=12bda907be02af66096253dac142f012
	Sep 23 12:40:20 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:40:20.721922709Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=0511e714d25fde9c traceID=8cc1750be37c17cfd19d9306a6749264
	Sep 23 12:40:20 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:40:20.722091930Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=0511e714d25fde9c traceID=8cc1750be37c17cfd19d9306a6749264
	Sep 23 12:40:20 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:40:20.731374200Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=0511e714d25fde9c traceID=8cc1750be37c17cfd19d9306a6749264
	Sep 23 12:40:21 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:40:21.920562328Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=9f7d1066657fa13c traceID=18a61b7ffc3f4ad288dc273093ad6774
	Sep 23 12:40:22 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:40:22.117986602Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=9f7d1066657fa13c traceID=18a61b7ffc3f4ad288dc273093ad6774
	Sep 23 12:40:22 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:40:22.118278039Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=9f7d1066657fa13c traceID=18a61b7ffc3f4ad288dc273093ad6774
	Sep 23 12:40:22 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:40:22.118338946Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=9f7d1066657fa13c traceID=18a61b7ffc3f4ad288dc273093ad6774
	Sep 23 12:41:02 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:41:02.938691674Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=37571850597be17c traceID=25654037f3c9a76c504a593287c44d1d
	Sep 23 12:41:03 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:41:03.157051582Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=37571850597be17c traceID=25654037f3c9a76c504a593287c44d1d
	Sep 23 12:41:03 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:41:03.157259303Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=37571850597be17c traceID=25654037f3c9a76c504a593287c44d1d
	Sep 23 12:41:03 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:41:03.157330211Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=37571850597be17c traceID=25654037f3c9a76c504a593287c44d1d
	Sep 23 12:41:43 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:41:43.753529874Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=19660d6f5b03a159 traceID=756b1e3e87026f4d85c5bf8100661067
	Sep 23 12:41:43 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:41:43.754006225Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=19660d6f5b03a159 traceID=756b1e3e87026f4d85c5bf8100661067
	Sep 23 12:41:43 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:41:43.765698958Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=19660d6f5b03a159 traceID=756b1e3e87026f4d85c5bf8100661067
	Sep 23 12:42:30 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:42:30.984759571Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=960a87225918c6d7 traceID=6ce7b0164d1c4b176fe2ef17e8d2ae86
	Sep 23 12:42:31 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:42:31.206225404Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=960a87225918c6d7 traceID=6ce7b0164d1c4b176fe2ef17e8d2ae86
	Sep 23 12:42:31 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:42:31.206563840Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=960a87225918c6d7 traceID=6ce7b0164d1c4b176fe2ef17e8d2ae86
	Sep 23 12:42:31 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:42:31.206610445Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=960a87225918c6d7 traceID=6ce7b0164d1c4b176fe2ef17e8d2ae86
	Sep 23 12:44:27 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:44:27.784911457Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=44f694be499ca448 traceID=2b4b3935b7906c7e4fe7cbd2308f2815
	Sep 23 12:44:27 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:44:27.785164886Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=44f694be499ca448 traceID=2b4b3935b7906c7e4fe7cbd2308f2815
	Sep 23 12:44:28 old-k8s-version-694600 dockerd[1463]: time="2024-09-23T12:44:28.019769303Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=44f694be499ca448 traceID=2b4b3935b7906c7e4fe7cbd2308f2815
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5eb85a2791fd2       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   2231baebe0719       kubernetes-dashboard-cd95d586-bk8ls
	4b8130a0a6312       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   5764a6a5e43b2       storage-provisioner
	ac17c0c4ecffd       bfe3a36ebd252                                                                                         5 minutes ago       Running             coredns                   1                   3b9e6eff3f245       coredns-74ff55c5b-xs5mb
	f00486aecb7f9       56cc512116c8f                                                                                         5 minutes ago       Running             busybox                   1                   35bbd70f3d721       busybox
	ae76dbbad5dfb       10cc881966cfd                                                                                         5 minutes ago       Running             kube-proxy                1                   1942f04a6c93b       kube-proxy-nf8m5
	9420ceb6a8a95       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   5764a6a5e43b2       storage-provisioner
	cbaa7f55c1cfa       b9fa1895dcaa6                                                                                         6 minutes ago       Running             kube-controller-manager   1                   ba583445183ed       kube-controller-manager-old-k8s-version-694600
	6f9ee2379541c       3138b6e3d4712                                                                                         6 minutes ago       Running             kube-scheduler            1                   7d0f4c0e596e1       kube-scheduler-old-k8s-version-694600
	bf2bddf93da4e       ca9843d3b5454                                                                                         6 minutes ago       Running             kube-apiserver            1                   fc9afd6fdf377       kube-apiserver-old-k8s-version-694600
	db948e782f564       0369cf4303ffd                                                                                         6 minutes ago       Running             etcd                      1                   2fcc4d1621a29       etcd-old-k8s-version-694600
	de69ff0211204       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 minutes ago       Exited              busybox                   0                   6f4e6b8c18962       busybox
	db8367477ac16       bfe3a36ebd252                                                                                         9 minutes ago       Exited              coredns                   0                   05f0b9906b1e6       coredns-74ff55c5b-xs5mb
	606305ef153c6       10cc881966cfd                                                                                         9 minutes ago       Exited              kube-proxy                0                   1bafd44ddcbc5       kube-proxy-nf8m5
	1382815ffdc3c       b9fa1895dcaa6                                                                                         9 minutes ago       Exited              kube-controller-manager   0                   4aabd63d6307e       kube-controller-manager-old-k8s-version-694600
	99e36abc5feb2       ca9843d3b5454                                                                                         9 minutes ago       Exited              kube-apiserver            0                   687e3798a5269       kube-apiserver-old-k8s-version-694600
	585ea5a5976b2       3138b6e3d4712                                                                                         9 minutes ago       Exited              kube-scheduler            0                   9764f5228cb61       kube-scheduler-old-k8s-version-694600
	a7d64ac5d6857       0369cf4303ffd                                                                                         9 minutes ago       Exited              etcd                      0                   7c33b44d00927       etcd-old-k8s-version-694600
	
	
	==> coredns [ac17c0c4ecff] <==
	I0923 12:39:17.546271       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 12:38:56.503685023 +0000 UTC m=+0.206425646) (total time: 21.046126661s):
	Trace[2019727887]: [21.046126661s] [21.046126661s] END
	E0923 12:39:17.546415       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0923 12:39:17.546350       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 12:38:56.502243282 +0000 UTC m=+0.204983905) (total time: 21.047731719s):
	Trace[1427131847]: [21.047731719s] [21.047731719s] END
	E0923 12:39:17.546448       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0923 12:39:17.546832       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 12:38:56.503192975 +0000 UTC m=+0.205933498) (total time: 21.047007149s):
	Trace[911902081]: [21.047007149s] [21.047007149s] END
	E0923 12:39:17.547446       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:55870 - 60746 "HINFO IN 2807392057809506795.3253440671122845835. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.069371227s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [db8367477ac1] <==
	I0923 12:35:46.530145       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 12:35:25.505087803 +0000 UTC m=+0.163205329) (total time: 21.027387842s):
	Trace[2019727887]: [21.027387842s] [21.027387842s] END
	E0923 12:35:46.530193       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0923 12:35:46.530489       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 12:35:25.505202515 +0000 UTC m=+0.163320041) (total time: 21.02788489s):
	Trace[939984059]: [21.02788489s] [21.02788489s] END
	E0923 12:35:46.530507       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0923 12:35:46.534782       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 12:35:25.505087903 +0000 UTC m=+0.163205529) (total time: 21.03227772s):
	Trace[1474941318]: [21.03227772s] [21.03227772s] END
	E0923 12:35:46.534890       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0923 12:37:37.391017       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=201&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	E0923 12:37:37.486487       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=615&timeout=6m27s&timeoutSeconds=387&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	E0923 12:37:37.390610       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=599&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57367 - 41141 "HINFO IN 3182396505344991187.8908902117498816762. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.06826722s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
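
Both coredns sections above are dominated by `dial tcp 10.96.0.1:443: connect: connection refused` failures, i.e. coredns came up while the apiserver was still unreachable. When triaging a dump like this, a quick count of those failures per container log tells you whether it was a brief startup race or a sustained outage. A minimal sketch; the heredoc is an illustrative excerpt of the lines above, not the full log (normally you would feed `docker logs <coredns-container>` instead):

```shell
#!/bin/sh
# Count apiserver connection failures in a coredns log.
# The embedded excerpt is illustrative; pipe in the real container log in practice.
cat > /tmp/coredns-excerpt.log <<'EOF'
E0923 12:35:46.530193 1 reflector.go:178] Failed to list *v1.Service: dial tcp 10.96.0.1:443: connect: connection refused
I0923 12:35:46.530489 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" END
E0923 12:35:46.530507 1 reflector.go:178] Failed to list *v1.Endpoints: dial tcp 10.96.0.1:443: connect: connection refused
E0923 12:35:46.534890 1 reflector.go:178] Failed to list *v1.Namespace: dial tcp 10.96.0.1:443: connect: connection refused
EOF
grep -c 'connection refused' /tmp/coredns-excerpt.log
```

A handful of hits clustered at startup (as here) is the normal restart race; a steadily growing count would point at a dead apiserver instead.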
	
	
	==> describe nodes <==
	Name:               old-k8s-version-694600
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-694600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=old-k8s-version-694600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_35_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:35:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-694600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:44:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:40:10 +0000   Mon, 23 Sep 2024 12:34:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:40:10 +0000   Mon, 23 Sep 2024 12:34:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:40:10 +0000   Mon, 23 Sep 2024 12:34:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:40:10 +0000   Mon, 23 Sep 2024 12:35:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-694600
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	System Info:
	  Machine ID:                 0959204301484a7d80d5c5d7316e5e8f
	  System UUID:                0959204301484a7d80d5c5d7316e5e8f
	  Boot ID:                    39082465-ae0b-4792-bc81-a99f7997c7d1
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 coredns-74ff55c5b-xs5mb                           100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m24s
	  kube-system                 etcd-old-k8s-version-694600                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         9m37s
	  kube-system                 kube-apiserver-old-k8s-version-694600             250m (1%)     0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 kube-controller-manager-old-k8s-version-694600    200m (1%)     0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 kube-proxy-nf8m5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-scheduler-old-k8s-version-694600             100m (0%)     0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 metrics-server-9975d5f86-vmdbz                    100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         7m7s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-2qtq6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-bk8ls               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  9m55s (x6 over 9m56s)  kubelet     Node old-k8s-version-694600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m55s (x6 over 9m56s)  kubelet     Node old-k8s-version-694600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m55s (x5 over 9m56s)  kubelet     Node old-k8s-version-694600 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m38s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m37s                  kubelet     Node old-k8s-version-694600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m37s                  kubelet     Node old-k8s-version-694600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m37s                  kubelet     Node old-k8s-version-694600 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m37s                  kubelet     Node old-k8s-version-694600 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m27s                  kubelet     Node old-k8s-version-694600 status is now: NodeReady
	  Normal  Starting                 9m18s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m9s                   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m9s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m9s)    kubelet     Node old-k8s-version-694600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x7 over 6m9s)    kubelet     Node old-k8s-version-694600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x8 over 6m9s)    kubelet     Node old-k8s-version-694600 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
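
The percentages in the node description's "Allocated resources" table are simply pod requests divided by the node's allocatable capacity (16 CPUs, 32868688Ki memory above). Recomputing them is a quick sanity check that the node is nowhere near pressure; a sketch using the figures from the table:

```shell
#!/bin/sh
# Recompute "Allocated resources" percentages from the node description:
# 850m CPU requested of 16 CPUs, 370Mi memory requested of 32868688Ki.
awk 'BEGIN {
  printf "cpu  %.0f%%\n", 850 / (16 * 1000) * 100        # millicores / total millicores
  printf "mem  %.1f%%\n", 370 * 1024 / 32868688 * 100    # Mi -> Ki, over allocatable Ki
}'
```

At ~5% CPU and ~1% memory requested, resource exhaustion on the node can be ruled out as a cause for this failure.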
	
	
	==> dmesg <==
	[  +8.880856] tmpfs: Unknown parameter 'noswap'
	[ +15.816418] tmpfs: Unknown parameter 'noswap'
	[  +9.927202] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:34] tmpfs: Unknown parameter 'noswap'
	[ +40.486929] tmpfs: Unknown parameter 'noswap'
	[ +10.134368] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:35] tmpfs: Unknown parameter 'noswap'
	[  +9.222755] tmpfs: Unknown parameter 'noswap'
	[  +8.102585] tmpfs: Unknown parameter 'noswap'
	[ +11.051964] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:36] tmpfs: Unknown parameter 'noswap'
	[  +6.170726] tmpfs: Unknown parameter 'noswap'
	[  +8.104595] tmpfs: Unknown parameter 'noswap'
	[ +17.643986] tmpfs: Unknown parameter 'noswap'
	[  +3.828445] hrtimer: interrupt took 1048310 ns
	[Sep23 12:37] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:38] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:39] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:41] tmpfs: Unknown parameter 'noswap'
	[ +24.182596] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:42] tmpfs: Unknown parameter 'noswap'
	[ +11.312667] tmpfs: Unknown parameter 'noswap'
	[ +13.937764] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:43] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:44] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [a7d64ac5d685] <==
	2024-09-23 12:36:58.487486 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true " with result "range_response_count:0 size:7" took too long (373.57181ms) to execute
	2024-09-23 12:36:58.487879 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (260.861311ms) to execute
	2024-09-23 12:36:58.781329 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (133.643592ms) to execute
	2024-09-23 12:37:00.882977 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:37:10.866923 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:37:16.391073 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:7" took too long (183.501048ms) to execute
	2024-09-23 12:37:18.255244 W | etcdserver: request "header:<ID:13873780788406946430 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/busybox\" mod_revision:558 > success:<request_put:<key:\"/registry/pods/default/busybox\" value_size:1171 >> failure:<request_range:<key:\"/registry/pods/default/busybox\" > >>" with result "size:16" took too long (179.155412ms) to execute
	2024-09-23 12:37:18.256254 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:1224" took too long (430.927025ms) to execute
	2024-09-23 12:37:18.575635 W | etcdserver: read-only range request "key:\"/registry/pods/default/busybox\" " with result "range_response_count:1 size:1224" took too long (306.562021ms) to execute
	2024-09-23 12:37:18.575759 W | etcdserver: request "header:<ID:13873780788406946436 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:4089921ee0a28a83>" with result "size:41" took too long (127.568625ms) to execute
	2024-09-23 12:37:20.866839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:37:20.890744 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.466957185s) to execute
	2024-09-23 12:37:20.891021 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.260751153s) to execute
	2024-09-23 12:37:20.891130 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1120" took too long (610.652793ms) to execute
	2024-09-23 12:37:20.891289 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2100" took too long (2.080161836s) to execute
	2024-09-23 12:37:21.019263 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (101.120967ms) to execute
	2024-09-23 12:37:24.993541 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2100" took too long (183.080326ms) to execute
	2024-09-23 12:37:26.746030 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (112.258444ms) to execute
	2024-09-23 12:37:26.746213 W | etcdserver: read-only range request "key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true " with result "range_response_count:0 size:7" took too long (143.591365ms) to execute
	2024-09-23 12:37:30.870269 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:37:37.584104 W | etcdserver: request "header:<ID:13873780788406946598 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.103.2\" mod_revision:576 > success:<request_delete_range:<key:\"/registry/masterleases/192.168.103.2\" > > failure:<request_range:<key:\"/registry/masterleases/192.168.103.2\" > >>" with result "size:18" took too long (100.286584ms) to execute
	2024-09-23 12:37:37.587002 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2024/09/23 12:37:37 grpc: addrConn.createTransport failed to connect to {192.168.103.2:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.103.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2024/09/23 12:37:37 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	2024-09-23 12:37:37.598556 I | etcdserver: skipped leadership transfer for single voting member cluster
	
	
	==> etcd [db948e782f56] <==
	2024-09-23 12:42:26.898801 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (362.989716ms) to execute
	2024-09-23 12:42:26.899299 W | etcdserver: read-only range request "key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true " with result "range_response_count:0 size:7" took too long (260.876795ms) to execute
	2024-09-23 12:42:27.755182 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-vmdbz\" " with result "range_response_count:1 size:4053" took too long (302.92782ms) to execute
	2024-09-23 12:42:29.743944 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-vmdbz\" " with result "range_response_count:1 size:4053" took too long (291.150663ms) to execute
	2024-09-23 12:42:34.741498 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:42:44.738517 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:42:54.738641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:04.738358 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:14.735509 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:24.735720 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:28.289937 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (173.385974ms) to execute
	2024-09-23 12:43:30.293137 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (177.952971ms) to execute
	2024-09-23 12:43:31.257353 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (141.01765ms) to execute
	2024-09-23 12:43:32.227591 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (107.212171ms) to execute
	2024-09-23 12:43:34.736330 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:44.732619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:54.733711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:44:04.733322 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:44:14.729878 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:44:24.730063 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:44:31.287729 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (181.134904ms) to execute
	2024-09-23 12:44:34.730511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:44:37.018414 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (136.239764ms) to execute
	2024-09-23 12:44:39.074303 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true " with result "range_response_count:0 size:7" took too long (136.978037ms) to execute
	2024-09-23 12:44:41.586250 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (246.69888ms) to execute
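
Both etcd sections are full of `took too long (...)` warnings on read-only range requests, which together with the load average of ~9 in the kernel section below points at host I/O/CPU contention rather than etcd itself. Pulling out just the durations makes the tail latency easy to eyeball; a sketch over an illustrative excerpt of the lines above:

```shell
#!/bin/sh
# Extract the "took too long (<duration>)" values from etcd warnings.
# Excerpt is illustrative; normally pipe in `docker logs <etcd-container>`.
cat > /tmp/etcd-excerpt.log <<'EOF'
2024-09-23 12:37:20.890744 W | etcdserver: read-only range request took too long (1.466957185s) to execute
2024-09-23 12:37:20.891289 W | etcdserver: read-only range request took too long (2.080161836s) to execute
2024-09-23 12:44:41.586250 W | etcdserver: read-only range request took too long (246.69888ms) to execute
EOF
sed -n 's/.*took too long (\([^)]*\)).*/\1/p' /tmp/etcd-excerpt.log
```

Multi-second health-check reads, as seen at 12:37:20, are the usual symptom of a saturated WSL2 host during parallel test runs.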
	
	
	==> kernel <==
	 12:44:43 up  1:45,  0 users,  load average: 9.06, 7.75, 6.66
	Linux old-k8s-version-694600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [99e36abc5feb] <==
	W0923 12:37:37.597328       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0923 12:37:37.597336       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0923 12:37:37.597373       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0923 12:37:37.597287       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0923 12:37:37.597393       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0923 12:37:37.597481       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0923 12:37:37.595662       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.597560       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.597143       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.597600       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.597680       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.597719       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.597771       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.597778       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.597203       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.597379       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.597846       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.598059       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.598087       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.598090       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.598131       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.598132       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 12:37:37.598358       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0923 12:37:37.682891       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0923 12:37:37.683227       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
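
The exited apiserver's log above is essentially one grpc reconnect message repeated while etcd shut down underneath it at 12:37:37. Collapsing duplicates before reading makes such sections tractable: drop the timestamp fields, then count identical remainders. A sketch (the heredoc is an illustrative excerpt with single-space field separators):

```shell
#!/bin/sh
# Collapse repeated log messages: strip the first three fields
# (level+date, time, goroutine id), then count identical remainders.
cat > /tmp/apiserver-excerpt.log <<'EOF'
W0923 12:37:37.597328 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect
I0923 12:37:37.597336 1 clientconn.go:897] blockingPicker: the picked transport is not ready
W0923 12:37:37.597287 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect
W0923 12:37:37.595662 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect
EOF
cut -d' ' -f4- /tmp/apiserver-excerpt.log | sort | uniq -c | sort -rn
```

On the real section above this reduces ~20 lines to two distinct messages, both expected during an etcd-first shutdown.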
	
	
	==> kube-apiserver [bf2bddf93da4] <==
	I0923 12:42:00.952359       1 trace.go:205] Trace[1637081081]: "Get" url:/api/v1/namespaces/kube-system/pods/metrics-server-9975d5f86-vmdbz,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,client:192.168.103.1 (23-Sep-2024 12:41:59.026) (total time: 1926ms):
	Trace[1637081081]: ---"About to write a response" 1925ms (12:42:00.951)
	Trace[1637081081]: [1.926127301s] [1.926127301s] END
	I0923 12:42:00.952089       1 trace.go:205] Trace[1857767128]: "Patch" url:/api/v1/namespaces/kube-system/pods/metrics-server-9975d5f86-vmdbz/status,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/af46c47,client:192.168.103.2 (23-Sep-2024 12:41:59.021) (total time: 1930ms):
	Trace[1857767128]: ---"Object stored in database" 1926ms (12:42:00.951)
	Trace[1857767128]: [1.930792493s] [1.930792493s] END
	I0923 12:42:07.015302       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:42:07.015810       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:42:07.015832       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 12:42:44.363954       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:42:44.364087       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:42:44.364099       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 12:43:27.106987       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:43:27.107047       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:43:27.107181       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0923 12:43:49.163439       1 handler_proxy.go:102] no RequestInfo found in the context
	E0923 12:43:49.163659       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0923 12:43:49.163680       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0923 12:44:01.866384       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:44:01.866742       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:44:01.866765       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 12:44:32.734531       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:44:32.734664       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:44:32.734676       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [1382815ffdc3] <==
	I0923 12:35:19.930676       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0923 12:35:19.932257       1 shared_informer.go:247] Caches are synced for expand 
	I0923 12:35:19.996577       1 shared_informer.go:247] Caches are synced for attach detach 
	I0923 12:35:19.997383       1 shared_informer.go:247] Caches are synced for disruption 
	I0923 12:35:19.997402       1 disruption.go:339] Sending events to api server.
	I0923 12:35:20.023443       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-xs5mb"
	I0923 12:35:20.025170       1 range_allocator.go:373] Set node old-k8s-version-694600 PodCIDR to [10.244.0.0/24]
	E0923 12:35:20.025866       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0923 12:35:20.096592       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0923 12:35:20.096945       1 shared_informer.go:247] Caches are synced for stateful set 
	I0923 12:35:20.196735       1 shared_informer.go:247] Caches are synced for resource quota 
	I0923 12:35:20.196869       1 shared_informer.go:247] Caches are synced for resource quota 
	I0923 12:35:20.221687       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-hq2qr"
	I0923 12:35:20.332984       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0923 12:35:20.420702       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nf8m5"
	I0923 12:35:20.696758       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0923 12:35:20.696795       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0923 12:35:20.696803       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0923 12:35:20.819398       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"2b6c202d-36a5-4109-9fad-cb7be142714d", ResourceVersion:"261", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63862691704, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0017de000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0017de020)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0017de040), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001788380), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0017de
060), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0017de080), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001532f40)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001105680), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000dff358), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001dd9d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001d001b8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000dff3a8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0923 12:35:23.914448       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0923 12:35:24.000086       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-hq2qr"
	I0923 12:37:35.034368       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0923 12:37:35.203717       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E0923 12:37:35.212382       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0923 12:37:36.104870       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-vmdbz"
	
	
	==> kube-controller-manager [cbaa7f55c1cf] <==
	W0923 12:40:18.897660       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:40:44.993960       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:40:50.545613       1 request.go:655] Throttling request took 1.047890828s, request: GET:https://192.168.103.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W0923 12:40:51.398118       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:41:15.498107       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:41:23.046632       1 request.go:655] Throttling request took 1.047369935s, request: GET:https://192.168.103.2:8443/apis/apiregistration.k8s.io/v1?timeout=32s
	W0923 12:41:23.899779       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:41:45.998856       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:41:55.548510       1 request.go:655] Throttling request took 1.045758368s, request: GET:https://192.168.103.2:8443/apis/batch/v1beta1?timeout=32s
	W0923 12:41:56.400880       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:42:16.499484       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:42:28.049181       1 request.go:655] Throttling request took 1.048254497s, request: GET:https://192.168.103.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0923 12:42:28.900788       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:42:47.000297       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:43:00.549058       1 request.go:655] Throttling request took 1.047411611s, request: GET:https://192.168.103.2:8443/apis/storage.k8s.io/v1?timeout=32s
	W0923 12:43:01.401271       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:43:17.502457       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:43:33.049267       1 request.go:655] Throttling request took 1.046138671s, request: GET:https://192.168.103.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0923 12:43:33.901416       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:43:48.001962       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:44:05.546282       1 request.go:655] Throttling request took 1.047758077s, request: GET:https://192.168.103.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0923 12:44:06.398643       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:44:18.502602       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:44:38.046563       1 request.go:655] Throttling request took 1.047457714s, request: GET:https://192.168.103.2:8443/apis/node.k8s.io/v1beta1?timeout=32s
	W0923 12:44:38.898375       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [606305ef153c] <==
	I0923 12:35:25.507474       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0923 12:35:25.508427       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0923 12:35:25.637020       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0923 12:35:25.637805       1 server_others.go:185] Using iptables Proxier.
	I0923 12:35:25.638659       1 server.go:650] Version: v1.20.0
	I0923 12:35:25.639899       1 config.go:224] Starting endpoint slice config controller
	I0923 12:35:25.640179       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0923 12:35:25.640499       1 config.go:315] Starting service config controller
	I0923 12:35:25.640515       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0923 12:35:25.740888       1 shared_informer.go:247] Caches are synced for service config 
	I0923 12:35:25.741005       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [ae76dbbad5df] <==
	I0923 12:38:56.715301       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0923 12:38:56.715579       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0923 12:38:56.879551       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0923 12:38:56.879829       1 server_others.go:185] Using iptables Proxier.
	I0923 12:38:56.881218       1 server.go:650] Version: v1.20.0
	I0923 12:38:56.882632       1 config.go:224] Starting endpoint slice config controller
	I0923 12:38:56.882679       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0923 12:38:56.882721       1 config.go:315] Starting service config controller
	I0923 12:38:56.882733       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0923 12:38:56.983351       1 shared_informer.go:247] Caches are synced for service config 
	I0923 12:38:56.983559       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [585ea5a5976b] <==
	E0923 12:35:00.408550       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 12:35:00.408561       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 12:35:00.409090       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 12:35:00.408622       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 12:35:00.408844       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:35:00.408966       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 12:35:00.408978       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 12:35:00.409192       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 12:35:00.409207       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:35:00.409218       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 12:35:00.409921       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:35:00.409971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:35:01.392177       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 12:35:01.420481       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 12:35:01.502671       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:35:01.504118       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 12:35:01.560604       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 12:35:01.648084       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:35:01.698181       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 12:35:01.698597       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 12:35:01.810202       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:35:01.840140       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 12:35:01.862857       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 12:35:01.998411       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0923 12:35:04.304374       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [6f9ee2379541] <==
	I0923 12:38:43.380649       1 serving.go:331] Generated self-signed cert in-memory
	W0923 12:38:48.380628       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 12:38:48.380817       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 12:38:48.380840       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 12:38:48.380847       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 12:38:48.593186       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0923 12:38:48.677902       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 12:38:48.677941       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 12:38:48.677978       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0923 12:38:48.978485       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 23 12:42:39 old-k8s-version-694600 kubelet[1893]: E0923 12:42:39.657460    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:42:45 old-k8s-version-694600 kubelet[1893]: E0923 12:42:45.658547    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:42:51 old-k8s-version-694600 kubelet[1893]: E0923 12:42:51.656478    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:42:58 old-k8s-version-694600 kubelet[1893]: E0923 12:42:58.657078    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:43:06 old-k8s-version-694600 kubelet[1893]: E0923 12:43:06.656952    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:43:09 old-k8s-version-694600 kubelet[1893]: E0923 12:43:09.654057    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:43:17 old-k8s-version-694600 kubelet[1893]: E0923 12:43:17.656711    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:43:22 old-k8s-version-694600 kubelet[1893]: E0923 12:43:22.657075    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:43:31 old-k8s-version-694600 kubelet[1893]: E0923 12:43:31.654949    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:43:34 old-k8s-version-694600 kubelet[1893]: E0923 12:43:34.668314    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:43:34 old-k8s-version-694600 kubelet[1893]: W0923 12:43:34.680215    1893 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Sep 23 12:43:34 old-k8s-version-694600 kubelet[1893]: W0923 12:43:34.681934    1893 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	Sep 23 12:43:46 old-k8s-version-694600 kubelet[1893]: E0923 12:43:46.652277    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:43:48 old-k8s-version-694600 kubelet[1893]: E0923 12:43:48.653714    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:44:00 old-k8s-version-694600 kubelet[1893]: E0923 12:44:00.654685    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:44:02 old-k8s-version-694600 kubelet[1893]: E0923 12:44:02.652220    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:44:13 old-k8s-version-694600 kubelet[1893]: E0923 12:44:13.649211    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:44:15 old-k8s-version-694600 kubelet[1893]: E0923 12:44:15.648344    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:44:28 old-k8s-version-694600 kubelet[1893]: E0923 12:44:28.020956    1893 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Sep 23 12:44:28 old-k8s-version-694600 kubelet[1893]: E0923 12:44:28.021097    1893 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Sep 23 12:44:28 old-k8s-version-694600 kubelet[1893]: E0923 12:44:28.021280    1893 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-ghn8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Sep 23 12:44:28 old-k8s-version-694600 kubelet[1893]: E0923 12:44:28.021315    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Sep 23 12:44:29 old-k8s-version-694600 kubelet[1893]: E0923 12:44:29.649676    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:44:39 old-k8s-version-694600 kubelet[1893]: E0923 12:44:39.645426    1893 pod_workers.go:191] Error syncing pod ae849738-e0cf-4a30-9b07-5cddd83db4b6 ("metrics-server-9975d5f86-vmdbz_kube-system(ae849738-e0cf-4a30-9b07-5cddd83db4b6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:44:41 old-k8s-version-694600 kubelet[1893]: E0923 12:44:41.668520    1893 pod_workers.go:191] Error syncing pod ed366f72-9f74-45c1-9866-9faee4d9ffb0 ("dashboard-metrics-scraper-8d5bb5db8-2qtq6_kubernetes-dashboard(ed366f72-9f74-45c1-9866-9faee4d9ffb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [5eb85a2791fd] <==
	2024/09/23 12:39:39 Using namespace: kubernetes-dashboard
	2024/09/23 12:39:39 Using in-cluster config to connect to apiserver
	2024/09/23 12:39:39 Using secret token for csrf signing
	2024/09/23 12:39:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/23 12:39:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/23 12:39:39 Successful initial request to the apiserver, version: v1.20.0
	2024/09/23 12:39:39 Generating JWE encryption key
	2024/09/23 12:39:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/23 12:39:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/23 12:39:40 Initializing JWE encryption key from synchronized object
	2024/09/23 12:39:40 Creating in-cluster Sidecar client
	2024/09/23 12:39:40 Serving insecurely on HTTP port: 9090
	2024/09/23 12:39:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:40:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:40:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:41:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:41:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:42:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:42:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:43:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:43:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:44:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:44:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:39:39 Starting overwatch
	
	
	==> storage-provisioner [4b8130a0a631] <==
	I0923 12:39:31.497855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 12:39:31.575050       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 12:39:31.575114       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 12:39:49.074337       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 12:39:49.075691       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-694600_3ce2ca5b-acf7-427e-8f5e-d77797b64e35!
	I0923 12:39:49.075735       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5e0b200-de87-4f47-a244-ebf98d0c4ab8", APIVersion:"v1", ResourceVersion:"822", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-694600_3ce2ca5b-acf7-427e-8f5e-d77797b64e35 became leader
	I0923 12:39:49.177344       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-694600_3ce2ca5b-acf7-427e-8f5e-d77797b64e35!
	
	
	==> storage-provisioner [9420ceb6a8a9] <==
	I0923 12:38:55.892618       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0923 12:39:17.049535       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-694600 -n old-k8s-version-694600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-694600 -n old-k8s-version-694600: (1.0016067s)
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-694600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-vmdbz dashboard-metrics-scraper-8d5bb5db8-2qtq6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-694600 describe pod metrics-server-9975d5f86-vmdbz dashboard-metrics-scraper-8d5bb5db8-2qtq6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-694600 describe pod metrics-server-9975d5f86-vmdbz dashboard-metrics-scraper-8d5bb5db8-2qtq6: exit status 1 (571.5259ms)

                                                
                                                
** stderr ** 
	E0923 12:44:46.870530    1668 memcache.go:287] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0923 12:44:46.981543    1668 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0923 12:44:47.001538    1668 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0923 12:44:47.017542    1668 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	Error from server (NotFound): pods "metrics-server-9975d5f86-vmdbz" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-2qtq6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-694600 describe pod metrics-server-9975d5f86-vmdbz dashboard-metrics-scraper-8d5bb5db8-2qtq6: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (417.12s)

                                                
                                    

Test pass (311/339)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.04
4 TestDownloadOnly/v1.20.0/preload-exists 0.09
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 1.17
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.07
12 TestDownloadOnly/v1.31.1/json-events 7.96
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.25
18 TestDownloadOnly/v1.31.1/DeleteAll 1.4
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.87
20 TestDownloadOnlyKic 3.3
21 TestBinaryMirror 2.93
22 TestOffline 120.29
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.33
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.33
27 TestAddons/Setup 524.55
29 TestAddons/serial/Volcano 55.51
31 TestAddons/serial/GCPAuth/Namespaces 0.36
35 TestAddons/parallel/InspektorGadget 12.04
36 TestAddons/parallel/MetricsServer 8.74
38 TestAddons/parallel/CSI 61.89
39 TestAddons/parallel/Headlamp 30.91
40 TestAddons/parallel/CloudSpanner 7.38
41 TestAddons/parallel/LocalPath 64.89
42 TestAddons/parallel/NvidiaDevicePlugin 7.88
43 TestAddons/parallel/Yakd 14.09
44 TestAddons/StoppedEnableDisable 13.74
45 TestCertOptions 88.79
46 TestCertExpiration 324.99
47 TestDockerFlags 81.39
48 TestForceSystemdFlag 110.77
49 TestForceSystemdEnv 90.11
56 TestErrorSpam/start 3.83
57 TestErrorSpam/status 2.75
58 TestErrorSpam/pause 3.31
59 TestErrorSpam/unpause 3.54
60 TestErrorSpam/stop 19.66
63 TestFunctional/serial/CopySyncFile 0.03
64 TestFunctional/serial/StartWithProxy 99.3
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 33.74
67 TestFunctional/serial/KubeContext 0.13
68 TestFunctional/serial/KubectlGetPods 0.3
71 TestFunctional/serial/CacheCmd/cache/add_remote 6.49
72 TestFunctional/serial/CacheCmd/cache/add_local 3.71
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.26
74 TestFunctional/serial/CacheCmd/cache/list 0.27
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.85
76 TestFunctional/serial/CacheCmd/cache/cache_reload 4.11
77 TestFunctional/serial/CacheCmd/cache/delete 0.52
78 TestFunctional/serial/MinikubeKubectlCmd 0.5
80 TestFunctional/serial/ExtraConfig 75.56
81 TestFunctional/serial/ComponentHealth 0.18
82 TestFunctional/serial/LogsCmd 2.41
83 TestFunctional/serial/LogsFileCmd 2.49
84 TestFunctional/serial/InvalidService 5.62
86 TestFunctional/parallel/ConfigCmd 1.74
88 TestFunctional/parallel/DryRun 2.32
89 TestFunctional/parallel/InternationalLanguage 0.93
90 TestFunctional/parallel/StatusCmd 2.79
95 TestFunctional/parallel/AddonsCmd 0.59
96 TestFunctional/parallel/PersistentVolumeClaim 55.64
98 TestFunctional/parallel/SSHCmd 1.69
99 TestFunctional/parallel/CpCmd 5.67
100 TestFunctional/parallel/MySQL 73.73
101 TestFunctional/parallel/FileSync 0.76
102 TestFunctional/parallel/CertSync 4.97
106 TestFunctional/parallel/NodeLabels 0.2
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.83
110 TestFunctional/parallel/License 3.29
111 TestFunctional/parallel/ServiceCmd/DeployApp 22.44
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.18
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 24.79
117 TestFunctional/parallel/ServiceCmd/List 1.19
118 TestFunctional/parallel/ServiceCmd/JSONOutput 1.24
119 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.22
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
126 TestFunctional/parallel/Version/short 0.33
127 TestFunctional/parallel/Version/components 3.2
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.79
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.92
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.9
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.78
132 TestFunctional/parallel/ImageCommands/ImageBuild 9.08
133 TestFunctional/parallel/ImageCommands/Setup 1.91
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.28
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.31
136 TestFunctional/parallel/DockerEnv/powershell 7.71
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.09
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.08
139 TestFunctional/parallel/ServiceCmd/Format 15.02
140 TestFunctional/parallel/ImageCommands/ImageRemove 1.32
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.68
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.45
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.45
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.44
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.22
146 TestFunctional/parallel/ProfileCmd/profile_not_create 1.47
147 TestFunctional/parallel/ProfileCmd/profile_list 1.48
148 TestFunctional/parallel/ProfileCmd/profile_json_output 1.29
149 TestFunctional/parallel/ServiceCmd/URL 15.03
150 TestFunctional/delete_echo-server_images 0.21
151 TestFunctional/delete_my-image_image 0.09
152 TestFunctional/delete_minikube_cached_images 0.09
156 TestMultiControlPlane/serial/StartCluster 217.28
157 TestMultiControlPlane/serial/DeployApp 29.56
158 TestMultiControlPlane/serial/PingHostFromPods 3.87
159 TestMultiControlPlane/serial/AddWorkerNode 55.85
160 TestMultiControlPlane/serial/NodeLabels 0.21
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 3.06
162 TestMultiControlPlane/serial/CopyFile 48.08
163 TestMultiControlPlane/serial/StopSecondaryNode 14.12
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 2.26
165 TestMultiControlPlane/serial/RestartSecondaryNode 147.67
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.95
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 220.2
168 TestMultiControlPlane/serial/DeleteSecondaryNode 17.36
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.17
170 TestMultiControlPlane/serial/StopCluster 36.52
171 TestMultiControlPlane/serial/RestartCluster 118.56
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.36
173 TestMultiControlPlane/serial/AddSecondaryNode 80.25
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2.83
177 TestImageBuild/serial/Setup 64.09
178 TestImageBuild/serial/NormalBuild 5.73
179 TestImageBuild/serial/BuildWithBuildArg 2.68
180 TestImageBuild/serial/BuildWithDockerIgnore 1.68
181 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.8
185 TestJSONOutput/start/Command 98.72
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 1.43
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 1.3
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.21
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.96
210 TestKicCustomNetwork/create_custom_network 73.41
211 TestKicCustomNetwork/use_default_bridge_network 70.28
212 TestKicExistingNetwork 69.67
213 TestKicCustomSubnet 70.37
214 TestKicStaticIP 68.92
215 TestMainNoArgs 0.24
216 TestMinikubeProfile 136.48
219 TestMountStart/serial/StartWithMountFirst 19.42
220 TestMountStart/serial/VerifyMountFirst 0.78
221 TestMountStart/serial/StartWithMountSecond 17.55
222 TestMountStart/serial/VerifyMountSecond 0.77
223 TestMountStart/serial/DeleteFirst 2.82
224 TestMountStart/serial/VerifyMountPostDelete 0.75
225 TestMountStart/serial/Stop 2.06
226 TestMountStart/serial/RestartStopped 12.4
227 TestMountStart/serial/VerifyMountPostStop 0.77
230 TestMultiNode/serial/FreshStart2Nodes 147.18
231 TestMultiNode/serial/DeployApp2Nodes 37.97
232 TestMultiNode/serial/PingHostFrom2Pods 2.56
233 TestMultiNode/serial/AddNode 48.81
234 TestMultiNode/serial/MultiNodeLabels 0.19
235 TestMultiNode/serial/ProfileList 1.94
236 TestMultiNode/serial/CopyFile 26.52
237 TestMultiNode/serial/StopNode 4.85
238 TestMultiNode/serial/StartAfterStop 18.3
239 TestMultiNode/serial/RestartKeepsNodes 116.17
240 TestMultiNode/serial/DeleteNode 9.97
241 TestMultiNode/serial/StopMultiNode 24.33
242 TestMultiNode/serial/RestartMultiNode 74.14
243 TestMultiNode/serial/ValidateNameConflict 67.36
247 TestPreload 172.58
248 TestScheduledStopWindows 133
252 TestInsufficientStorage 42.25
253 TestRunningBinaryUpgrade 188.72
255 TestKubernetesUpgrade 239.37
256 TestMissingContainerUpgrade 394.02
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.42
259 TestNoKubernetes/serial/StartWithK8s 102.64
260 TestNoKubernetes/serial/StartWithStopK8s 28.76
261 TestNoKubernetes/serial/Start 30.74
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.8
263 TestNoKubernetes/serial/ProfileList 4.67
264 TestNoKubernetes/serial/Stop 2.61
265 TestNoKubernetes/serial/StartNoArgs 20.2
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.89
267 TestStoppedBinaryUpgrade/Setup 0.83
268 TestStoppedBinaryUpgrade/Upgrade 263.48
277 TestPause/serial/Start 149.19
278 TestStoppedBinaryUpgrade/MinikubeLogs 3.43
279 TestPause/serial/SecondStartNoReconfiguration 37.16
291 TestPause/serial/Pause 1.68
292 TestPause/serial/VerifyStatus 0.99
293 TestPause/serial/Unpause 1.56
294 TestPause/serial/PauseAgain 1.75
295 TestPause/serial/DeletePaused 5.98
296 TestPause/serial/VerifyDeletedResources 2.31
298 TestStartStop/group/old-k8s-version/serial/FirstStart 232.37
300 TestStartStop/group/no-preload/serial/FirstStart 133.86
302 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 116.75
304 TestStartStop/group/newest-cni/serial/FirstStart 78.15
305 TestStartStop/group/no-preload/serial/DeployApp 11.02
306 TestStartStop/group/newest-cni/serial/DeployApp 0
307 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.21
308 TestStartStop/group/newest-cni/serial/Stop 13.02
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.01
310 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.34
311 TestStartStop/group/no-preload/serial/Stop 12.94
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.17
313 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.84
314 TestStartStop/group/newest-cni/serial/SecondStart 41.14
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.06
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.81
317 TestStartStop/group/no-preload/serial/SecondStart 329.48
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.99
319 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 297.22
320 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.98
323 TestStartStop/group/newest-cni/serial/Pause 12.5
325 TestStartStop/group/embed-certs/serial/FirstStart 95.88
326 TestStartStop/group/old-k8s-version/serial/DeployApp 18.89
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.67
328 TestStartStop/group/old-k8s-version/serial/Stop 13.01
329 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.96
331 TestStartStop/group/embed-certs/serial/DeployApp 12.07
332 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.82
333 TestStartStop/group/embed-certs/serial/Stop 12.94
334 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.88
335 TestStartStop/group/embed-certs/serial/SecondStart 294.63
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.02
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.47
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.67
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.39
340 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
341 TestNetworkPlugins/group/auto/Start 100.07
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.44
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.67
344 TestStartStop/group/no-preload/serial/Pause 7.41
345 TestNetworkPlugins/group/calico/Start 161.32
346 TestNetworkPlugins/group/auto/KubeletFlags 0.99
347 TestNetworkPlugins/group/auto/NetCatPod 21.8
348 TestNetworkPlugins/group/auto/DNS 0.43
349 TestNetworkPlugins/group/auto/Localhost 0.37
350 TestNetworkPlugins/group/auto/HairPin 0.4
351 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.46
353 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.76
354 TestStartStop/group/embed-certs/serial/Pause 9.16
355 TestNetworkPlugins/group/custom-flannel/Start 113.85
356 TestNetworkPlugins/group/false/Start 119.76
357 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.14
358 TestNetworkPlugins/group/calico/ControllerPod 6.01
359 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.9
360 TestNetworkPlugins/group/calico/KubeletFlags 0.77
361 TestNetworkPlugins/group/calico/NetCatPod 30.41
362 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.73
363 TestStartStop/group/old-k8s-version/serial/Pause 14.46
364 TestNetworkPlugins/group/kindnet/Start 114.17
365 TestNetworkPlugins/group/calico/DNS 0.38
366 TestNetworkPlugins/group/calico/Localhost 0.36
367 TestNetworkPlugins/group/calico/HairPin 0.34
368 TestNetworkPlugins/group/flannel/Start 104.11
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.88
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 25.84
371 TestNetworkPlugins/group/false/KubeletFlags 1.78
372 TestNetworkPlugins/group/false/NetCatPod 25.81
373 TestNetworkPlugins/group/custom-flannel/DNS 0.4
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.35
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.38
376 TestNetworkPlugins/group/false/DNS 0.39
377 TestNetworkPlugins/group/false/Localhost 0.35
378 TestNetworkPlugins/group/false/HairPin 0.34
379 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
380 TestNetworkPlugins/group/kindnet/KubeletFlags 0.9
381 TestNetworkPlugins/group/kindnet/NetCatPod 19.76
382 TestNetworkPlugins/group/kindnet/DNS 0.38
383 TestNetworkPlugins/group/kindnet/Localhost 0.69
384 TestNetworkPlugins/group/kindnet/HairPin 0.4
385 TestNetworkPlugins/group/enable-default-cni/Start 154.93
386 TestNetworkPlugins/group/bridge/Start 115.28
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.82
389 TestNetworkPlugins/group/flannel/NetCatPod 27.52
390 TestNetworkPlugins/group/kubenet/Start 105.35
391 TestNetworkPlugins/group/flannel/DNS 0.42
392 TestNetworkPlugins/group/flannel/Localhost 0.33
393 TestNetworkPlugins/group/flannel/HairPin 0.32
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.89
395 TestNetworkPlugins/group/bridge/NetCatPod 19.73
396 TestNetworkPlugins/group/bridge/DNS 0.34
397 TestNetworkPlugins/group/bridge/Localhost 0.29
398 TestNetworkPlugins/group/bridge/HairPin 0.31
399 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.83
400 TestNetworkPlugins/group/enable-default-cni/NetCatPod 20.58
401 TestNetworkPlugins/group/kubenet/KubeletFlags 0.86
402 TestNetworkPlugins/group/kubenet/NetCatPod 21.84
403 TestNetworkPlugins/group/enable-default-cni/DNS 0.38
404 TestNetworkPlugins/group/enable-default-cni/Localhost 0.33
405 TestNetworkPlugins/group/enable-default-cni/HairPin 0.32
406 TestNetworkPlugins/group/kubenet/DNS 0.34
407 TestNetworkPlugins/group/kubenet/Localhost 0.31
408 TestNetworkPlugins/group/kubenet/HairPin 0.34
TestDownloadOnly/v1.20.0/json-events (10.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-264900 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-264900 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker: (10.0440309s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.04s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 11:07:13.020272   13200 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 11:07:13.108430   13200 preload.go:146] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.09s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-264900
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-264900: exit status 85 (291.7477ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-264900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC |          |
	|         | -p download-only-264900        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:07:03
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:07:03.081378    5208 out.go:345] Setting OutFile to fd 688 ...
	I0923 11:07:03.160646    5208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:07:03.160646    5208 out.go:358] Setting ErrFile to fd 692...
	I0923 11:07:03.160646    5208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 11:07:03.173955    5208 root.go:314] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0923 11:07:03.185126    5208 out.go:352] Setting JSON to true
	I0923 11:07:03.188596    5208 start.go:129] hostinfo: {"hostname":"minikube2","uptime":490,"bootTime":1727089132,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0923 11:07:03.188596    5208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:07:03.194849    5208 out.go:97] [download-only-264900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:07:03.195198    5208 notify.go:220] Checking for updates...
	W0923 11:07:03.195382    5208 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0923 11:07:03.197646    5208 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 11:07:03.200946    5208 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0923 11:07:03.204967    5208 out.go:169] MINIKUBE_LOCATION=19690
	I0923 11:07:03.207093    5208 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0923 11:07:03.211827    5208 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 11:07:03.212786    5208 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:07:03.402122    5208 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 11:07:03.410361    5208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:07:04.727365    5208 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.3168401s)
	I0923 11:07:04.728579    5208 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:66 SystemTime:2024-09-23 11:07:04.702327401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:07:04.733624    5208 out.go:97] Using the docker driver based on user configuration
	I0923 11:07:04.733624    5208 start.go:297] selected driver: docker
	I0923 11:07:04.733624    5208 start.go:901] validating driver "docker" against <nil>
	I0923 11:07:04.748626    5208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:07:05.107688    5208 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:66 SystemTime:2024-09-23 11:07:05.076089927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:07:05.108108    5208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:07:05.227115    5208 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0923 11:07:05.228479    5208 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 11:07:05.232117    5208 out.go:169] Using Docker Desktop driver with root privileges
	I0923 11:07:05.234894    5208 cni.go:84] Creating CNI manager for ""
	I0923 11:07:05.234894    5208 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 11:07:05.234894    5208 start.go:340] cluster config:
	{Name:download-only-264900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-264900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:07:05.237119    5208 out.go:97] Starting "download-only-264900" primary control-plane node in "download-only-264900" cluster
	I0923 11:07:05.237119    5208 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 11:07:05.239865    5208 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:07:05.239865    5208 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 11:07:05.239865    5208 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:07:05.290278    5208 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0923 11:07:05.290381    5208 cache.go:56] Caching tarball of preloaded images
	I0923 11:07:05.290447    5208 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 11:07:05.293477    5208 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 11:07:05.293477    5208 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0923 11:07:05.326581    5208 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:07:05.326581    5208 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 11:07:05.326581    5208 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 11:07:05.326581    5208 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 11:07:05.327803    5208 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:07:05.360862    5208 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-264900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-264900"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1659808s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.17s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-264900
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-264900: (1.0732534s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (7.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-005800 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-005800 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker: (7.9626916s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.96s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 11:07:23.607779   13200 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 11:07:23.608138   13200 preload.go:146] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-005800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-005800: exit status 85 (252.2932ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-264900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC |                     |
	|         | -p download-only-264900        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC | 23 Sep 24 11:07 UTC |
	| delete  | -p download-only-264900        | download-only-264900 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC | 23 Sep 24 11:07 UTC |
	| start   | -o=json --download-only        | download-only-005800 | minikube2\jenkins | v1.34.0 | 23 Sep 24 11:07 UTC |                     |
	|         | -p download-only-005800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:07:15
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:07:15.759193    6700 out.go:345] Setting OutFile to fd 812 ...
	I0923 11:07:15.843701    6700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:07:15.843701    6700 out.go:358] Setting ErrFile to fd 756...
	I0923 11:07:15.843701    6700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:07:15.866548    6700 out.go:352] Setting JSON to true
	I0923 11:07:15.869563    6700 start.go:129] hostinfo: {"hostname":"minikube2","uptime":503,"bootTime":1727089132,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0923 11:07:15.869677    6700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:07:15.874064    6700 out.go:97] [download-only-005800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:07:15.874064    6700 notify.go:220] Checking for updates...
	I0923 11:07:15.877632    6700 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 11:07:15.880072    6700 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0923 11:07:15.882915    6700 out.go:169] MINIKUBE_LOCATION=19690
	I0923 11:07:15.885217    6700 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0923 11:07:15.891117    6700 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 11:07:15.892040    6700 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:07:16.079989    6700 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 11:07:16.090182    6700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:07:16.451682    6700 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:66 SystemTime:2024-09-23 11:07:16.421991012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:07:16.705658    6700 out.go:97] Using the docker driver based on user configuration
	I0923 11:07:16.705658    6700 start.go:297] selected driver: docker
	I0923 11:07:16.706214    6700 start.go:901] validating driver "docker" against <nil>
	I0923 11:07:16.722536    6700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:07:17.068574    6700 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:66 SystemTime:2024-09-23 11:07:17.033376221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:07:17.069450    6700 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:07:17.124265    6700 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0923 11:07:17.125534    6700 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 11:07:17.560656    6700 out.go:169] Using Docker Desktop driver with root privileges
	I0923 11:07:17.569373    6700 cni.go:84] Creating CNI manager for ""
	I0923 11:07:17.569373    6700 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:07:17.569373    6700 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 11:07:17.569373    6700 start.go:340] cluster config:
	{Name:download-only-005800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-005800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:07:17.581006    6700 out.go:97] Starting "download-only-005800" primary control-plane node in "download-only-005800" cluster
	I0923 11:07:17.581376    6700 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 11:07:17.610435    6700 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:07:17.611411    6700 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:07:17.611698    6700 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:07:17.655011    6700 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:07:17.655079    6700 cache.go:56] Caching tarball of preloaded images
	I0923 11:07:17.655693    6700 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:07:17.711875    6700 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:07:17.711875    6700 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 11:07:17.711875    6700 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 11:07:17.711875    6700 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 11:07:17.711875    6700 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 11:07:17.711875    6700 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 11:07:17.712806    6700 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 11:07:17.859411    6700 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 11:07:17.859411    6700 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 11:07:17.922571    6700 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-005800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-005800"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
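The `download.go:107` line in the log above shows how minikube guards the preload fetch: the URL carries a `?checksum=md5:<hex>` query parameter, and the downloader recomputes the digest after the transfer and discards the file on mismatch. A minimal sketch of that verify step, run against a throwaway local file rather than the real multi-gigabyte tarball (the `md5sum`/`mktemp` invocations and the file contents are stand-ins, not minikube code):

```shell
# Sketch of the checksum gate applied to a stand-in for
# preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4.
set -eu

tarball=$(mktemp)
printf 'stand-in preload bytes' > "$tarball"

# The "?checksum=md5:<hex>" query parameter carries the expected digest;
# here we derive it from the file itself to keep the sketch self-contained.
expected=$(md5sum "$tarball" | cut -d' ' -f1)

# After the download completes, recompute the digest and compare before
# unpacking anything into the cache directory.
actual=$(md5sum "$tarball" | cut -d' ' -f1)
if [ "$expected" = "$actual" ]; then
  echo "preload checksum ok"
else
  echo "preload checksum mismatch, discarding download" >&2
  exit 1
fi
rm -f "$tarball"
```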
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.25s)

TestDownloadOnly/v1.31.1/DeleteAll (1.4s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3983907s)
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (1.40s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.87s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-005800
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.87s)

TestDownloadOnlyKic (3.3s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-959000 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-959000 --alsologtostderr --driver=docker: (1.662922s)
helpers_test.go:175: Cleaning up "download-docker-959000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-959000
--- PASS: TestDownloadOnlyKic (3.30s)

TestBinaryMirror (2.93s)

=== RUN   TestBinaryMirror
I0923 11:07:30.796350   13200 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-833500 --alsologtostderr --binary-mirror http://127.0.0.1:53131 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-833500 --alsologtostderr --binary-mirror http://127.0.0.1:53131 --driver=docker: (1.8714314s)
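The `binary.go:74` line above shows the other checksum scheme minikube uses: `checksum=file:<url>.sha256` means the expected digest is not inlined in the URL but read from a sidecar `.sha256` file that the mirror must serve alongside the binary. A self-contained sketch of that verification, using local stand-in files in place of `kubectl.exe` and its `dl.k8s.io` sidecar (file names and contents are illustrative assumptions):

```shell
set -eu

# Stand-in for kubectl.exe as served by a --binary-mirror endpoint.
bin=$(mktemp)
printf 'stand-in kubectl bytes' > "$bin"

# "checksum=file:<url>.sha256" means the mirror serves the expected digest
# as a separate file; its first whitespace-separated field is the hex sum.
sha256sum "$bin" | cut -d' ' -f1 > "$bin.sha256"

want=$(cut -d' ' -f1 "$bin.sha256")
got=$(sha256sum "$bin" | cut -d' ' -f1)
if [ "$want" = "$got" ]; then
  echo "binary checksum verified"
fi
rm -f "$bin" "$bin.sha256"
```

This is why TestBinaryMirror only needs to point `--binary-mirror` at a plain HTTP server: as long as the mirror serves both the binary and its sidecar digest file, the client can validate the download itself.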
helpers_test.go:175: Cleaning up "binary-mirror-833500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-833500
--- PASS: TestBinaryMirror (2.93s)

TestOffline (120.29s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-603800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-603800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (1m55.2088947s)
helpers_test.go:175: Cleaning up "offline-docker-603800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-603800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-603800: (5.0781372s)
--- PASS: TestOffline (120.29s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.33s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-827700
addons_test.go:975: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-827700: exit status 85 (324.375ms)

-- stdout --
	* Profile "addons-827700" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-827700"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.33s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.33s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-827700
addons_test.go:986: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-827700: exit status 85 (325.8993ms)

-- stdout --
	* Profile "addons-827700" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-827700"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.33s)

TestAddons/Setup (524.55s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-827700 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-827700 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns: (8m44.5514085s)
--- PASS: TestAddons/Setup (524.55s)

TestAddons/serial/Volcano (55.51s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 77.6753ms
addons_test.go:851: volcano-controller stabilized in 77.6753ms
addons_test.go:835: volcano-scheduler stabilized in 77.6753ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-p286d" [65450e01-2e0b-4057-af4d-6958b02349b6] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.009395s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-qxjn5" [24b0ca74-d596-46cc-b734-e3e7f8f6b0f7] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0072803s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-nfzfz" [6cb8ac6d-34ba-47fc-850b-bc70ad3fec9c] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0082558s
addons_test.go:870: (dbg) Run:  kubectl --context addons-827700 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-827700 create -f testdata\vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-827700 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f6dcb0cf-d039-4576-bc93-8a20fd77258c] Pending
helpers_test.go:344: "test-job-nginx-0" [f6dcb0cf-d039-4576-bc93-8a20fd77258c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f6dcb0cf-d039-4576-bc93-8a20fd77258c] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 27.0085092s
addons_test.go:906: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-827700 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-windows-amd64.exe -p addons-827700 addons disable volcano --alsologtostderr -v=1: (11.420902s)
--- PASS: TestAddons/serial/Volcano (55.51s)

TestAddons/serial/GCPAuth/Namespaces (0.36s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-827700 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-827700 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.36s)

TestAddons/parallel/InspektorGadget (12.04s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bxc6k" [05cd8b93-217b-42e5-8d3d-200c16b23c03] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0093943s
addons_test.go:789: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-827700
addons_test.go:789: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-827700: (7.0258382s)
--- PASS: TestAddons/parallel/InspektorGadget (12.04s)

TestAddons/parallel/MetricsServer (8.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 46.0032ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-pb9f5" [a8e12ecd-fbb1-43b6-ad32-62445b93b363] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0120311s
addons_test.go:413: (dbg) Run:  kubectl --context addons-827700 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-827700 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p addons-827700 addons disable metrics-server --alsologtostderr -v=1: (2.4999022s)
--- PASS: TestAddons/parallel/MetricsServer (8.74s)

TestAddons/parallel/CSI (61.89s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0923 11:25:30.775151   13200 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 11:25:30.839139   13200 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 11:25:30.839139   13200 kapi.go:107] duration metric: took 63.9878ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 63.9878ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-827700 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase} -n default
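The run of identical `helpers_test.go:394` lines above is the test helper polling the PVC's `.status.phase` until it leaves `Pending`, with a 6m0s deadline. A minimal shell version of that loop; `poll` is a stub standing in for the real `kubectl --context addons-827700 get pvc hpvc -o jsonpath={.status.phase}` call (here it reports `Pending` twice, then `Bound`, purely to make the sketch self-contained):

```shell
tries=0

# Stub for the kubectl jsonpath query; a real loop would shell out to
# kubectl here and sleep between calls.
poll() {
  tries=$((tries + 1))
  if [ "$tries" -ge 3 ]; then
    phase="Bound"
  else
    phase="Pending"
  fi
}

phase=""
while [ "$phase" != "Bound" ]; do
  poll
  # the real helper also enforces the 6m0s deadline and fails the test
  # if the PVC never binds
done
echo "pvc hpvc reached phase $phase after $tries polls"
```

Note that `poll` is a shell function rather than a command substitution so its updates to `tries` and `phase` happen in the current shell instead of a subshell.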
addons_test.go:518: (dbg) Run:  kubectl --context addons-827700 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fe44fb82-a1a6-4855-925b-cb69350358a3] Pending
helpers_test.go:344: "task-pv-pod" [fe44fb82-a1a6-4855-925b-cb69350358a3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fe44fb82-a1a6-4855-925b-cb69350358a3] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.0717573s
addons_test.go:528: (dbg) Run:  kubectl --context addons-827700 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-827700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-827700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-827700 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-827700 delete pod task-pv-pod: (2.2275311s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-827700 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-827700 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-827700 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b30dd198-eda2-452a-b01c-2704179ae1f5] Pending
helpers_test.go:344: "task-pv-pod-restore" [b30dd198-eda2-452a-b01c-2704179ae1f5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b30dd198-eda2-452a-b01c-2704179ae1f5] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0086001s
addons_test.go:570: (dbg) Run:  kubectl --context addons-827700 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-827700 delete pod task-pv-pod-restore: (1.926368s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-827700 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-827700 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-827700 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-windows-amd64.exe -p addons-827700 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.1881911s)
addons_test.go:586: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-827700 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-windows-amd64.exe -p addons-827700 addons disable volumesnapshots --alsologtostderr -v=1: (1.6713977s)
--- PASS: TestAddons/parallel/CSI (61.89s)

                                                
                                    
TestAddons/parallel/Headlamp (30.91s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-827700 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-827700 --alsologtostderr -v=1: (1.7173782s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-skj46" [015aae6f-2357-4ded-96b6-246c43c1377c] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-skj46" [015aae6f-2357-4ded-96b6-246c43c1377c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-skj46" [015aae6f-2357-4ded-96b6-246c43c1377c] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.019714s
addons_test.go:777: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-827700 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-windows-amd64.exe -p addons-827700 addons disable headlamp --alsologtostderr -v=1: (7.1669809s)
--- PASS: TestAddons/parallel/Headlamp (30.91s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.38s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-b4fnw" [cc103529-cd4c-4a67-a354-402758beaf05] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0139139s
addons_test.go:808: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-827700
addons_test.go:808: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-827700: (1.3495758s)
--- PASS: TestAddons/parallel/CloudSpanner (7.38s)

                                                
                                    
TestAddons/parallel/LocalPath (64.89s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-827700 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-827700 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c3e3ff40-83e5-4850-9de2-ecc592d403c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c3e3ff40-83e5-4850-9de2-ecc592d403c3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c3e3ff40-83e5-4850-9de2-ecc592d403c3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.006082s
addons_test.go:938: (dbg) Run:  kubectl --context addons-827700 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-827700 ssh "cat /opt/local-path-provisioner/pvc-e0f6395d-66c4-4c42-9f5c-1478cb042762_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-827700 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-827700 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-827700 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-windows-amd64.exe -p addons-827700 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (45.3410703s)
--- PASS: TestAddons/parallel/LocalPath (64.89s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.88s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4p2zq" [b5c30d33-8dae-49de-a646-f149449da74f] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0075098s
addons_test.go:1002: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-827700
addons_test.go:1002: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-827700: (1.866381s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.88s)

                                                
                                    
TestAddons/parallel/Yakd (14.09s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7zcpj" [e8489b45-d3d8-4116-a47a-7edc0f075e9c] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0075098s
addons_test.go:1014: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-827700 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-windows-amd64.exe -p addons-827700 addons disable yakd --alsologtostderr -v=1: (8.0754606s)
--- PASS: TestAddons/parallel/Yakd (14.09s)

                                                
                                    
TestAddons/StoppedEnableDisable (13.74s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-827700
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-827700: (12.5132779s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-827700
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-827700
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-827700
--- PASS: TestAddons/StoppedEnableDisable (13.74s)

                                                
                                    
TestCertOptions (88.79s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-952700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-952700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m21.0663575s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-952700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I0923 12:33:15.173736   13200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-952700
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-952700 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-952700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-952700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-952700: (5.843799s)
--- PASS: TestCertOptions (88.79s)

                                                
                                    
TestCertExpiration (324.99s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-913600 --memory=2048 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-913600 --memory=2048 --cert-expiration=3m --driver=docker: (1m43.4189868s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-913600 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-913600 --memory=2048 --cert-expiration=8760h --driver=docker: (37.1140135s)
helpers_test.go:175: Cleaning up "cert-expiration-913600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-913600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-913600: (4.4516002s)
--- PASS: TestCertExpiration (324.99s)

                                                
                                    
TestDockerFlags (81.39s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-891400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-891400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m14.040821s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-891400 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-891400 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-891400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-891400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-891400: (5.643622s)
--- PASS: TestDockerFlags (81.39s)

                                                
                                    
TestForceSystemdFlag (110.77s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-260000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-260000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m43.8369072s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-260000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-260000 ssh "docker info --format {{.CgroupDriver}}": (1.5407902s)
helpers_test.go:175: Cleaning up "force-systemd-flag-260000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-260000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-260000: (5.3923132s)
--- PASS: TestForceSystemdFlag (110.77s)

                                                
                                    
TestForceSystemdEnv (90.11s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-076200 --memory=2048 --alsologtostderr -v=5 --driver=docker
E0923 12:33:04.973605   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-076200 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m21.3553853s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-076200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-076200 ssh "docker info --format {{.CgroupDriver}}": (1.0740106s)
helpers_test.go:175: Cleaning up "force-systemd-env-076200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-076200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-076200: (7.6741731s)
--- PASS: TestForceSystemdEnv (90.11s)

                                                
                                    
TestErrorSpam/start (3.83s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 start --dry-run: (1.199125s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 start --dry-run: (1.2885257s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 start --dry-run: (1.3351433s)
--- PASS: TestErrorSpam/start (3.83s)

                                                
                                    
TestErrorSpam/status (2.75s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 status
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 status
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 status
--- PASS: TestErrorSpam/status (2.75s)

                                                
                                    
TestErrorSpam/pause (3.31s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 pause: (1.4522757s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 pause
--- PASS: TestErrorSpam/pause (3.31s)

                                                
                                    
TestErrorSpam/unpause (3.54s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 unpause: (1.2402501s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 unpause: (1.315486s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 unpause
--- PASS: TestErrorSpam/unpause (3.54s)

                                                
                                    
TestErrorSpam/stop (19.66s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 stop: (12.1578653s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 stop: (4.2179385s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-111700 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-111700 stop: (3.283249s)
--- PASS: TestErrorSpam/stop (19.66s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\13200\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (99.3s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-716900 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-716900 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m39.2921969s)
--- PASS: TestFunctional/serial/StartWithProxy (99.30s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (33.74s)
=== RUN   TestFunctional/serial/SoftStart
I0923 11:30:42.178831   13200 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-716900 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-716900 --alsologtostderr -v=8: (33.732065s)
functional_test.go:663: soft start took 33.7345112s for "functional-716900" cluster.
I0923 11:31:15.912793   13200 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (33.74s)

                                                
                                    
TestFunctional/serial/KubeContext (0.13s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.3s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-716900 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.30s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 cache add registry.k8s.io/pause:3.1
E0923 11:31:18.609237   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:31:18.616091   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:31:18.627871   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:31:18.649916   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 cache add registry.k8s.io/pause:3.1: (2.3142159s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 cache add registry.k8s.io/pause:3.3
E0923 11:31:18.692551   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:31:18.774893   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:31:18.937121   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:31:19.259860   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:31:19.901901   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 cache add registry.k8s.io/pause:3.3: (2.0883558s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 cache add registry.k8s.io/pause:latest
E0923 11:31:21.184437   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 cache add registry.k8s.io/pause:latest: (2.0907778s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.49s)

TestFunctional/serial/CacheCmd/cache/add_local (3.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-716900 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2183087159\001
E0923 11:31:23.747684   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-716900 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2183087159\001: (1.6447913s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 cache add minikube-local-cache-test:functional-716900
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 cache add minikube-local-cache-test:functional-716900: (1.6370386s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 cache delete minikube-local-cache-test:functional-716900
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-716900
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.71s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

TestFunctional/serial/CacheCmd/cache/list (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.27s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.85s)

TestFunctional/serial/CacheCmd/cache/cache_reload (4.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0923 11:31:28.870251   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-716900 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (802.6831ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 cache reload: (1.6319079s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.11s)

TestFunctional/serial/CacheCmd/cache/delete (0.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.52s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 kubectl -- --context functional-716900 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/ExtraConfig (75.56s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-716900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0923 11:31:39.113240   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:31:59.595773   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:32:40.558913   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-716900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m15.5588347s)
functional_test.go:761: restart took 1m15.5596878s for "functional-716900" cluster.
I0923 11:32:53.808924   13200 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (75.56s)

TestFunctional/serial/ComponentHealth (0.18s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-716900 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

TestFunctional/serial/LogsCmd (2.41s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 logs
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 logs: (2.4098098s)
--- PASS: TestFunctional/serial/LogsCmd (2.41s)

TestFunctional/serial/LogsFileCmd (2.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3607827620\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3607827620\001\logs.txt: (2.4867955s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.49s)

TestFunctional/serial/InvalidService (5.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-716900 apply -f testdata\invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-716900
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-716900: exit status 115 (1.131664s)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30186 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_service_6bd82f1fe87f7552f02cc11dc4370801e3dafecc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-716900 delete -f testdata\invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-716900 delete -f testdata\invalidsvc.yaml: (1.0308578s)
--- PASS: TestFunctional/serial/InvalidService (5.62s)

TestFunctional/parallel/ConfigCmd (1.74s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-716900 config get cpus: exit status 14 (272.9864ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-716900 config get cpus: exit status 14 (266.2274ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.74s)

TestFunctional/parallel/DryRun (2.32s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-716900 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-716900 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (939.7127ms)

-- stdout --
	* [functional-716900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 11:33:49.450243    2996 out.go:345] Setting OutFile to fd 1296 ...
	I0923 11:33:49.531661    2996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:33:49.531661    2996 out.go:358] Setting ErrFile to fd 1300...
	I0923 11:33:49.531661    2996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:33:49.560160    2996 out.go:352] Setting JSON to false
	I0923 11:33:49.564145    2996 start.go:129] hostinfo: {"hostname":"minikube2","uptime":2097,"bootTime":1727089132,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0923 11:33:49.564145    2996 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:33:49.567106    2996 out.go:177] * [functional-716900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:33:49.570100    2996 notify.go:220] Checking for updates...
	I0923 11:33:49.573144    2996 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 11:33:49.575108    2996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:33:49.579152    2996 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0923 11:33:49.582119    2996 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:33:49.584112    2996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:33:49.587120    2996 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:33:49.589121    2996 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:33:49.784845    2996 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 11:33:49.792852    2996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:33:50.153233    2996 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:82 SystemTime:2024-09-23 11:33:50.12267089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:33:50.158235    2996 out.go:177] * Using the docker driver based on existing profile
	I0923 11:33:50.161250    2996 start.go:297] selected driver: docker
	I0923 11:33:50.161250    2996 start.go:901] validating driver "docker" against &{Name:functional-716900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-716900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:33:50.161250    2996 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:33:50.233387    2996 out.go:201] 
	W0923 11:33:50.236018    2996 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 11:33:50.238706    2996 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-716900 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:991: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-716900 --dry-run --alsologtostderr -v=1 --driver=docker: (1.3832771s)
--- PASS: TestFunctional/parallel/DryRun (2.32s)

TestFunctional/parallel/InternationalLanguage (0.93s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-716900 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-716900 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (926.5647ms)

-- stdout --
	* [functional-716900] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 11:33:51.779768   12696 out.go:345] Setting OutFile to fd 1396 ...
	I0923 11:33:51.869296   12696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:33:51.869296   12696 out.go:358] Setting ErrFile to fd 1048...
	I0923 11:33:51.869296   12696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:33:51.891276   12696 out.go:352] Setting JSON to false
	I0923 11:33:51.895570   12696 start.go:129] hostinfo: {"hostname":"minikube2","uptime":2099,"bootTime":1727089132,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0923 11:33:51.895570   12696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:33:51.899720   12696 out.go:177] * [functional-716900] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:33:51.902924   12696 notify.go:220] Checking for updates...
	I0923 11:33:51.905039   12696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0923 11:33:51.907486   12696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:33:51.910470   12696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0923 11:33:51.912537   12696 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:33:51.915474   12696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:33:51.918471   12696 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:33:51.920457   12696 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:33:52.126491   12696 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 11:33:52.134462   12696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:33:52.473984   12696 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:82 SystemTime:2024-09-23 11:33:52.430713932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:33:52.479022   12696 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 11:33:52.480990   12696 start.go:297] selected driver: docker
	I0923 11:33:52.480990   12696 start.go:901] validating driver "docker" against &{Name:functional-716900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-716900 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:33:52.481989   12696 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:33:52.542883   12696 out.go:201] 
	W0923 11:33:52.545839   12696 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 11:33:52.548800   12696 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.93s)

                                                
                                    
TestFunctional/parallel/StatusCmd (2.79s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 status
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.79s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.59s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.59s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (55.64s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [acd58970-823c-4772-9860-2c7fa16de877] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0495706s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-716900 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-716900 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-716900 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-716900 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dba1fa50-0dfb-49bc-a43e-16ba62638107] Pending
helpers_test.go:344: "sp-pod" [dba1fa50-0dfb-49bc-a43e-16ba62638107] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dba1fa50-0dfb-49bc-a43e-16ba62638107] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 38.0078675s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-716900 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-716900 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-716900 delete -f testdata/storage-provisioner/pod.yaml: (1.9246329s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-716900 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [29af503b-c0f7-47f6-b808-0794adb4c88c] Pending
helpers_test.go:344: "sp-pod" [29af503b-c0f7-47f6-b808-0794adb4c88c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [29af503b-c0f7-47f6-b808-0794adb4c88c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0095106s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-716900 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.64s)

                                                
                                    
TestFunctional/parallel/SSHCmd (1.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.69s)

                                                
                                    
TestFunctional/parallel/CpCmd (5.67s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh -n functional-716900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 cp functional-716900:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd3991516023\001\cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh -n functional-716900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 ssh -n functional-716900 "sudo cat /home/docker/cp-test.txt": (1.0320524s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (1.0369225s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh -n functional-716900 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 ssh -n functional-716900 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.0472126s)
--- PASS: TestFunctional/parallel/CpCmd (5.67s)

                                                
                                    
TestFunctional/parallel/MySQL (73.73s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-716900 replace --force -f testdata\mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-7dk8x" [be1130ae-5307-4e1d-8cf7-91668d7d5d38] Pending
helpers_test.go:344: "mysql-6cdb49bbb-7dk8x" [be1130ae-5307-4e1d-8cf7-91668d7d5d38] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-7dk8x" [be1130ae-5307-4e1d-8cf7-91668d7d5d38] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m2.0084226s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;": exit status 1 (302.4527ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 11:34:56.219434   13200 retry.go:31] will retry after 532.862639ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;": exit status 1 (293.6151ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 11:34:57.056396   13200 retry.go:31] will retry after 1.019370014s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;": exit status 1 (300.1707ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 11:34:58.387937   13200 retry.go:31] will retry after 1.644282421s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;": exit status 1 (346.7719ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 11:35:00.388432   13200 retry.go:31] will retry after 2.367689308s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;": exit status 1 (343.712ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 11:35:03.109594   13200 retry.go:31] will retry after 3.669523105s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-716900 exec mysql-6cdb49bbb-7dk8x -- mysql -ppassword -e "show databases;"
E0923 11:36:18.610516   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:36:46.324631   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/MySQL (73.73s)

                                                
                                    
TestFunctional/parallel/FileSync (0.76s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13200/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh "sudo cat /etc/test/nested/copy/13200/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.76s)

                                                
                                    
TestFunctional/parallel/CertSync (4.97s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13200.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh "sudo cat /etc/ssl/certs/13200.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13200.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh "sudo cat /usr/share/ca-certificates/13200.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/132002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh "sudo cat /etc/ssl/certs/132002.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/132002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh "sudo cat /usr/share/ca-certificates/132002.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.97s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.2s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-716900 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.20s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-716900 ssh "sudo systemctl is-active crio": exit status 1 (828.2168ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)

                                                
                                    
TestFunctional/parallel/License (3.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (3.2743045s)
--- PASS: TestFunctional/parallel/License (3.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (22.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-716900 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-716900 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-fxbf8" [93e8f2e7-88cf-496d-a2ba-ac49c29e7d2b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-fxbf8" [93e8f2e7-88cf-496d-a2ba-ac49c29e7d2b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.0068797s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-716900 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-716900 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-716900 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-716900 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7396: OpenProcess: The parameter is incorrect.
helpers_test.go:502: unable to terminate pid 9780: The parameter is incorrect.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.18s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-716900 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (24.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-716900 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b7c05c52-f504-4e93-aba5-867fc88c0206] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b7c05c52-f504-4e93-aba5-867fc88c0206] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 24.0091857s
I0923 11:33:32.182566   13200 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (24.79s)
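
For context, a manifest matching the `run=nginx-svc` pod and service this tunnel test waits on would look roughly like the following. This is a sketch, not the actual contents of `testdata\testsvc.yaml`; the pod/service name and label are taken from the log above, everything else is assumed.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer   # `minikube tunnel` is what populates status.loadBalancer.ingress
  selector:
    run: nginx-svc
  ports:
  - port: 80
```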

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 service list
functional_test.go:1459: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 service list: (1.1910119s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 service list -o json: (1.2364927s)
functional_test.go:1494: Took "1.2364927s" to run "out/minikube-windows-amd64.exe -p functional-716900 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-716900 service --namespace=default --https --url hello-node: exit status 1 (15.0116829s)

                                                
                                                
-- stdout --
	https://127.0.0.1:54676

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1522: found endpoint: https://127.0.0.1:54676
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-716900 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-716900 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 6092: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 7020: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

                                                
                                    
TestFunctional/parallel/Version/short (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 version --short
--- PASS: TestFunctional/parallel/Version/short (0.33s)

                                                
                                    
TestFunctional/parallel/Version/components (3.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 version -o=json --components
E0923 11:34:02.481639   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 version -o=json --components: (3.1958776s)
--- PASS: TestFunctional/parallel/Version/components (3.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-716900 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-716900
docker.io/kicbase/echo-server:functional-716900
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-716900 image ls --format short --alsologtostderr:
I0923 11:34:06.886790    1124 out.go:345] Setting OutFile to fd 1448 ...
I0923 11:34:06.967877    1124 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:34:06.967877    1124 out.go:358] Setting ErrFile to fd 1452...
I0923 11:34:06.967877    1124 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:34:06.987520    1124 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:34:06.988207    1124 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:34:07.009655    1124 cli_runner.go:164] Run: docker container inspect functional-716900 --format={{.State.Status}}
I0923 11:34:07.108450    1124 ssh_runner.go:195] Run: systemctl --version
I0923 11:34:07.115447    1124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
I0923 11:34:07.194335    1124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
I0923 11:34:07.397665    1124 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-716900 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kicbase/echo-server               | functional-716900 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-716900 | e2efda0966d6d | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-716900 image ls --format table --alsologtostderr:
I0923 11:34:15.485613    6584 out.go:345] Setting OutFile to fd 1316 ...
I0923 11:34:15.574677    6584 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:34:15.574677    6584 out.go:358] Setting ErrFile to fd 700...
I0923 11:34:15.574677    6584 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:34:15.595414    6584 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:34:15.595951    6584 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:34:15.615157    6584 cli_runner.go:164] Run: docker container inspect functional-716900 --format={{.State.Status}}
I0923 11:34:15.719619    6584 ssh_runner.go:195] Run: systemctl --version
I0923 11:34:15.727621    6584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
I0923 11:34:15.804625    6584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
I0923 11:34:16.002406    6584 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-716900 image ls --format json --alsologtostderr:
[{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e2efda0966d6deb432b4791d30d8e709491b0429c8ef04e80270cb6174f622c7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-716900"],"size":"30"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-716900"],"size":"4940000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-716900 image ls --format json --alsologtostderr:
I0923 11:34:14.567234   10568 out.go:345] Setting OutFile to fd 1076 ...
I0923 11:34:14.657245   10568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:34:14.657245   10568 out.go:358] Setting ErrFile to fd 1328...
I0923 11:34:14.657245   10568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:34:14.681940   10568 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:34:14.681940   10568 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:34:14.704970   10568 cli_runner.go:164] Run: docker container inspect functional-716900 --format={{.State.Status}}
I0923 11:34:14.808257   10568 ssh_runner.go:195] Run: systemctl --version
I0923 11:34:14.816660   10568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
I0923 11:34:14.904020   10568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
I0923 11:34:15.045183   10568 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-716900 image ls --format yaml --alsologtostderr:
- id: e2efda0966d6deb432b4791d30d8e709491b0429c8ef04e80270cb6174f622c7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-716900
size: "30"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-716900
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-716900 image ls --format yaml --alsologtostderr:
I0923 11:34:07.678112    8484 out.go:345] Setting OutFile to fd 760 ...
I0923 11:34:07.761108    8484 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:34:07.761108    8484 out.go:358] Setting ErrFile to fd 1004...
I0923 11:34:07.761108    8484 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:34:07.777559    8484 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:34:07.778668    8484 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:34:07.799274    8484 cli_runner.go:164] Run: docker container inspect functional-716900 --format={{.State.Status}}
I0923 11:34:07.901429    8484 ssh_runner.go:195] Run: systemctl --version
I0923 11:34:07.913419    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
I0923 11:34:07.994111    8484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
I0923 11:34:08.156863    8484 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (9.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-716900 ssh pgrep buildkitd: exit status 1 (738.6392ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image build -t localhost/my-image:functional-716900 testdata\build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 image build -t localhost/my-image:functional-716900 testdata\build --alsologtostderr: (7.6258301s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-716900 image build -t localhost/my-image:functional-716900 testdata\build --alsologtostderr:
I0923 11:34:09.189894    5492 out.go:345] Setting OutFile to fd 1324 ...
I0923 11:34:09.287589    5492 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:34:09.287589    5492 out.go:358] Setting ErrFile to fd 1148...
I0923 11:34:09.287589    5492 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:34:09.306567    5492 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:34:09.325549    5492 config.go:182] Loaded profile config "functional-716900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:34:09.341531    5492 cli_runner.go:164] Run: docker container inspect functional-716900 --format={{.State.Status}}
I0923 11:34:09.426553    5492 ssh_runner.go:195] Run: systemctl --version
I0923 11:34:09.435575    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-716900
I0923 11:34:09.516519    5492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54336 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-716900\id_rsa Username:docker}
I0923 11:34:09.633909    5492 build_images.go:161] Building image from path: C:\Users\jenkins.minikube2\AppData\Local\Temp\build.3129503764.tar
I0923 11:34:09.648948    5492 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 11:34:09.689306    5492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3129503764.tar
I0923 11:34:09.698176    5492 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3129503764.tar: stat -c "%s %y" /var/lib/minikube/build/build.3129503764.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3129503764.tar': No such file or directory
I0923 11:34:09.698176    5492 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\AppData\Local\Temp\build.3129503764.tar --> /var/lib/minikube/build/build.3129503764.tar (3072 bytes)
I0923 11:34:09.758330    5492 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3129503764
I0923 11:34:09.804902    5492 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3129503764 -xf /var/lib/minikube/build/build.3129503764.tar
I0923 11:34:09.825968    5492 docker.go:360] Building image: /var/lib/minikube/build/build.3129503764
I0923 11:34:09.835912    5492 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-716900 /var/lib/minikube/build/build.3129503764
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 29B
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.9s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 3.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.3s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 writing image sha256:dd433fb304897dc1550249da0108827ccf309c4e2c624f634b4b1a72b16b60d3
#8 writing image sha256:dd433fb304897dc1550249da0108827ccf309c4e2c624f634b4b1a72b16b60d3 0.0s done
#8 naming to localhost/my-image:functional-716900 0.0s done
#8 DONE 0.3s
I0923 11:34:16.577954    5492 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-716900 /var/lib/minikube/build/build.3129503764: (6.7420144s)
I0923 11:34:16.595962    5492 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3129503764
I0923 11:34:16.644954    5492 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3129503764.tar
I0923 11:34:16.664856    5492 build_images.go:217] Built localhost/my-image:functional-716900 from C:\Users\jenkins.minikube2\AppData\Local\Temp\build.3129503764.tar
I0923 11:34:16.664856    5492 build_images.go:133] succeeded building to: functional-716900
I0923 11:34:16.664856    5492 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.08s)

TestFunctional/parallel/ImageCommands/Setup (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.7959701s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-716900
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.91s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image load --daemon kicbase/echo-server:functional-716900 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 image load --daemon kicbase/echo-server:functional-716900 --alsologtostderr: (2.5690851s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.28s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image load --daemon kicbase/echo-server:functional-716900 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 image load --daemon kicbase/echo-server:functional-716900 --alsologtostderr: (1.6065489s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.31s)

TestFunctional/parallel/DockerEnv/powershell (7.71s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-716900 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-716900"
functional_test.go:499: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-716900 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-716900": (4.5407677s)
functional_test.go:522: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-716900 docker-env | Invoke-Expression ; docker images"
functional_test.go:522: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-716900 docker-env | Invoke-Expression ; docker images": (3.1485476s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (7.71s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-716900
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image load --daemon kicbase/echo-server:functional-716900 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 image load --daemon kicbase/echo-server:functional-716900 --alsologtostderr: (1.5518581s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.09s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image save kicbase/echo-server:functional-716900 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 image save kicbase/echo-server:functional-716900 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.0774483s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.08s)

TestFunctional/parallel/ServiceCmd/Format (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-716900 service hello-node --url --format={{.IP}}: exit status 1 (15.0170775s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr **
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image rm kicbase/echo-server:functional-716900 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.32s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.0359864s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.45s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.45s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.44s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.44s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-716900
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 image save --daemon kicbase/echo-server:functional-716900 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-716900 image save --daemon kicbase/echo-server:functional-716900 --alsologtostderr: (1.0456335s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-716900
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.1287093s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.47s)

TestFunctional/parallel/ProfileCmd/profile_list (1.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.2096844s)
functional_test.go:1315: Took "1.2096844s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "273.9714ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (1.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.0449259s)
functional_test.go:1366: Took "1.0449259s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "246.472ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.29s)

TestFunctional/parallel/ServiceCmd/URL (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-716900 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-716900 service hello-node --url: exit status 1 (15.0281172s)

-- stdout --
	http://127.0.0.1:54795

-- /stdout --
** stderr **
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1565: found endpoint for hello-node: http://127.0.0.1:54795
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.03s)

TestFunctional/delete_echo-server_images (0.21s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-716900
--- PASS: TestFunctional/delete_echo-server_images (0.21s)

TestFunctional/delete_my-image_image (0.09s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-716900
--- PASS: TestFunctional/delete_my-image_image (0.09s)

TestFunctional/delete_minikube_cached_images (0.09s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-716900
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

TestMultiControlPlane/serial/StartCluster (217.28s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-667600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
E0923 11:41:18.611411   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-667600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker: (3m34.8585991s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr: (2.4215397s)
--- PASS: TestMultiControlPlane/serial/StartCluster (217.28s)

TestMultiControlPlane/serial/DeployApp (29.56s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-667600 -- rollout status deployment/busybox: (19.0834398s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-7t6dz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-7t6dz -- nslookup kubernetes.io: (2.092286s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-npv56 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-npv56 -- nslookup kubernetes.io: (1.5856469s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-zmhw5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-zmhw5 -- nslookup kubernetes.io: (1.5790857s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-7t6dz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-npv56 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-zmhw5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-7t6dz -- nslookup kubernetes.default.svc.cluster.local
E0923 11:43:04.953524   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:43:04.961547   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:43:04.974531   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:43:04.997586   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:43:05.040533   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:43:05.123066   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:43:05.284900   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-npv56 -- nslookup kubernetes.default.svc.cluster.local
E0923 11:43:05.608723   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-zmhw5 -- nslookup kubernetes.default.svc.cluster.local
E0923 11:43:06.250615   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DeployApp (29.56s)

TestMultiControlPlane/serial/PingHostFromPods (3.87s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-7t6dz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0923 11:43:07.533176   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-7t6dz -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-npv56 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-npv56 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-zmhw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-667600 -- exec busybox-7dff88458-zmhw5 -- sh -c "ping -c 1 192.168.65.254"
E0923 11:43:10.095407   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (3.87s)

TestMultiControlPlane/serial/AddWorkerNode (55.85s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-667600 -v=7 --alsologtostderr
E0923 11:43:15.218199   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:43:25.460583   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:43:45.943024   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-667600 -v=7 --alsologtostderr: (52.8307122s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr: (3.01672s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.85s)

TestMultiControlPlane/serial/NodeLabels (0.21s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-667600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.21s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (3.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.0607786s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (3.06s)

TestMultiControlPlane/serial/CopyFile (48.08s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 status --output json -v=7 --alsologtostderr: (2.8160689s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp testdata\cp-test.txt ha-667600:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1826528927\001\cp-test_ha-667600.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600:/home/docker/cp-test.txt ha-667600-m02:/home/docker/cp-test_ha-667600_ha-667600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600:/home/docker/cp-test.txt ha-667600-m02:/home/docker/cp-test_ha-667600_ha-667600-m02.txt: (1.1470825s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m02 "sudo cat /home/docker/cp-test_ha-667600_ha-667600-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600:/home/docker/cp-test.txt ha-667600-m03:/home/docker/cp-test_ha-667600_ha-667600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600:/home/docker/cp-test.txt ha-667600-m03:/home/docker/cp-test_ha-667600_ha-667600-m03.txt: (1.1556247s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m03 "sudo cat /home/docker/cp-test_ha-667600_ha-667600-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600:/home/docker/cp-test.txt ha-667600-m04:/home/docker/cp-test_ha-667600_ha-667600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600:/home/docker/cp-test.txt ha-667600-m04:/home/docker/cp-test_ha-667600_ha-667600-m04.txt: (1.1570681s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m04 "sudo cat /home/docker/cp-test_ha-667600_ha-667600-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp testdata\cp-test.txt ha-667600-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1826528927\001\cp-test_ha-667600-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m02 "sudo cat /home/docker/cp-test.txt"
E0923 11:44:26.906328   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m02:/home/docker/cp-test.txt ha-667600:/home/docker/cp-test_ha-667600-m02_ha-667600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m02:/home/docker/cp-test.txt ha-667600:/home/docker/cp-test_ha-667600-m02_ha-667600.txt: (1.1735488s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600 "sudo cat /home/docker/cp-test_ha-667600-m02_ha-667600.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m02:/home/docker/cp-test.txt ha-667600-m03:/home/docker/cp-test_ha-667600-m02_ha-667600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m02:/home/docker/cp-test.txt ha-667600-m03:/home/docker/cp-test_ha-667600-m02_ha-667600-m03.txt: (1.1244647s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m03 "sudo cat /home/docker/cp-test_ha-667600-m02_ha-667600-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m02:/home/docker/cp-test.txt ha-667600-m04:/home/docker/cp-test_ha-667600-m02_ha-667600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m02:/home/docker/cp-test.txt ha-667600-m04:/home/docker/cp-test_ha-667600-m02_ha-667600-m04.txt: (1.1725404s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m04 "sudo cat /home/docker/cp-test_ha-667600-m02_ha-667600-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp testdata\cp-test.txt ha-667600-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1826528927\001\cp-test_ha-667600-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m03:/home/docker/cp-test.txt ha-667600:/home/docker/cp-test_ha-667600-m03_ha-667600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m03:/home/docker/cp-test.txt ha-667600:/home/docker/cp-test_ha-667600-m03_ha-667600.txt: (1.1717573s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600 "sudo cat /home/docker/cp-test_ha-667600-m03_ha-667600.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m03:/home/docker/cp-test.txt ha-667600-m02:/home/docker/cp-test_ha-667600-m03_ha-667600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m03:/home/docker/cp-test.txt ha-667600-m02:/home/docker/cp-test_ha-667600-m03_ha-667600-m02.txt: (1.1676875s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m02 "sudo cat /home/docker/cp-test_ha-667600-m03_ha-667600-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m03:/home/docker/cp-test.txt ha-667600-m04:/home/docker/cp-test_ha-667600-m03_ha-667600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m03:/home/docker/cp-test.txt ha-667600-m04:/home/docker/cp-test_ha-667600-m03_ha-667600-m04.txt: (1.1647068s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m04 "sudo cat /home/docker/cp-test_ha-667600-m03_ha-667600-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp testdata\cp-test.txt ha-667600-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1826528927\001\cp-test_ha-667600-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m04:/home/docker/cp-test.txt ha-667600:/home/docker/cp-test_ha-667600-m04_ha-667600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m04:/home/docker/cp-test.txt ha-667600:/home/docker/cp-test_ha-667600-m04_ha-667600.txt: (1.1258439s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600 "sudo cat /home/docker/cp-test_ha-667600-m04_ha-667600.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m04:/home/docker/cp-test.txt ha-667600-m02:/home/docker/cp-test_ha-667600-m04_ha-667600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m04:/home/docker/cp-test.txt ha-667600-m02:/home/docker/cp-test_ha-667600-m04_ha-667600-m02.txt: (1.1439522s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m02 "sudo cat /home/docker/cp-test_ha-667600-m04_ha-667600-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m04:/home/docker/cp-test.txt ha-667600-m03:/home/docker/cp-test_ha-667600-m04_ha-667600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 cp ha-667600-m04:/home/docker/cp-test.txt ha-667600-m03:/home/docker/cp-test_ha-667600-m04_ha-667600-m03.txt: (1.1935815s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 ssh -n ha-667600-m03 "sudo cat /home/docker/cp-test_ha-667600-m04_ha-667600-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (48.08s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 node stop m02 -v=7 --alsologtostderr: (11.9294187s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr: exit status 7 (2.1918187s)

                                                
                                                
-- stdout --
	ha-667600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-667600-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-667600-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-667600-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 11:45:09.693781    8056 out.go:345] Setting OutFile to fd 828 ...
	I0923 11:45:09.771460    8056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:45:09.771531    8056 out.go:358] Setting ErrFile to fd 760...
	I0923 11:45:09.771531    8056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:45:09.785968    8056 out.go:352] Setting JSON to false
	I0923 11:45:09.785968    8056 mustload.go:65] Loading cluster: ha-667600
	I0923 11:45:09.785968    8056 notify.go:220] Checking for updates...
	I0923 11:45:09.786800    8056 config.go:182] Loaded profile config "ha-667600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:45:09.787335    8056 status.go:174] checking status of ha-667600 ...
	I0923 11:45:09.805470    8056 cli_runner.go:164] Run: docker container inspect ha-667600 --format={{.State.Status}}
	I0923 11:45:09.885052    8056 status.go:364] ha-667600 host status = "Running" (err=<nil>)
	I0923 11:45:09.885052    8056 host.go:66] Checking if "ha-667600" exists ...
	I0923 11:45:09.893050    8056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-667600
	I0923 11:45:09.970075    8056 host.go:66] Checking if "ha-667600" exists ...
	I0923 11:45:09.984099    8056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:45:09.991067    8056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-667600
	I0923 11:45:10.067376    8056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54950 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\ha-667600\id_rsa Username:docker}
	I0923 11:45:10.215168    8056 ssh_runner.go:195] Run: systemctl --version
	I0923 11:45:10.241036    8056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:45:10.284617    8056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-667600
	I0923 11:45:10.358609    8056 kubeconfig.go:125] found "ha-667600" server: "https://127.0.0.1:54949"
	I0923 11:45:10.358609    8056 api_server.go:166] Checking apiserver status ...
	I0923 11:45:10.371625    8056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:45:10.406575    8056 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2463/cgroup
	I0923 11:45:10.430135    8056 api_server.go:182] apiserver freezer: "7:freezer:/docker/7193af522f0dff800ea511578279a4630c973a6877dc87bfc9ab8431c0b19569/kubepods/burstable/pod226cc5f4d0e027d3790ea17d0adfde74/817d2b24646d92ce5bec4267173274755ce88e3a76d8c11074b96bf82b4e3822"
	I0923 11:45:10.445527    8056 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7193af522f0dff800ea511578279a4630c973a6877dc87bfc9ab8431c0b19569/kubepods/burstable/pod226cc5f4d0e027d3790ea17d0adfde74/817d2b24646d92ce5bec4267173274755ce88e3a76d8c11074b96bf82b4e3822/freezer.state
	I0923 11:45:10.467457    8056 api_server.go:204] freezer state: "THAWED"
	I0923 11:45:10.467566    8056 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54949/healthz ...
	I0923 11:45:10.485485    8056 api_server.go:279] https://127.0.0.1:54949/healthz returned 200:
	ok
	I0923 11:45:10.485559    8056 status.go:456] ha-667600 apiserver status = Running (err=<nil>)
	I0923 11:45:10.485559    8056 status.go:176] ha-667600 status: &{Name:ha-667600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:45:10.485559    8056 status.go:174] checking status of ha-667600-m02 ...
	I0923 11:45:10.502617    8056 cli_runner.go:164] Run: docker container inspect ha-667600-m02 --format={{.State.Status}}
	I0923 11:45:10.582554    8056 status.go:364] ha-667600-m02 host status = "Stopped" (err=<nil>)
	I0923 11:45:10.582554    8056 status.go:377] host is not running, skipping remaining checks
	I0923 11:45:10.582554    8056 status.go:176] ha-667600-m02 status: &{Name:ha-667600-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:45:10.582554    8056 status.go:174] checking status of ha-667600-m03 ...
	I0923 11:45:10.599563    8056 cli_runner.go:164] Run: docker container inspect ha-667600-m03 --format={{.State.Status}}
	I0923 11:45:10.673537    8056 status.go:364] ha-667600-m03 host status = "Running" (err=<nil>)
	I0923 11:45:10.673537    8056 host.go:66] Checking if "ha-667600-m03" exists ...
	I0923 11:45:10.683819    8056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-667600-m03
	I0923 11:45:10.761480    8056 host.go:66] Checking if "ha-667600-m03" exists ...
	I0923 11:45:10.775199    8056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:45:10.783059    8056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-667600-m03
	I0923 11:45:10.865993    8056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55102 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\ha-667600-m03\id_rsa Username:docker}
	I0923 11:45:11.019343    8056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:45:11.060532    8056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-667600
	I0923 11:45:11.139116    8056 kubeconfig.go:125] found "ha-667600" server: "https://127.0.0.1:54949"
	I0923 11:45:11.139116    8056 api_server.go:166] Checking apiserver status ...
	I0923 11:45:11.151564    8056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:45:11.199281    8056 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2383/cgroup
	I0923 11:45:11.224092    8056 api_server.go:182] apiserver freezer: "7:freezer:/docker/37cf4eb8675acead529d3bc1f61b52c8078a203827ec3b7c01d5157ba3754cd4/kubepods/burstable/pod9b1e5e64a7bbd87219dc3630e1636cb6/819a35b51107e98f92346aa919d917bb140e9e7f1a79b3b0f7e6431fc050f281"
	I0923 11:45:11.247612    8056 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/37cf4eb8675acead529d3bc1f61b52c8078a203827ec3b7c01d5157ba3754cd4/kubepods/burstable/pod9b1e5e64a7bbd87219dc3630e1636cb6/819a35b51107e98f92346aa919d917bb140e9e7f1a79b3b0f7e6431fc050f281/freezer.state
	I0923 11:45:11.271638    8056 api_server.go:204] freezer state: "THAWED"
	I0923 11:45:11.271755    8056 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54949/healthz ...
	I0923 11:45:11.282761    8056 api_server.go:279] https://127.0.0.1:54949/healthz returned 200:
	ok
	I0923 11:45:11.282761    8056 status.go:456] ha-667600-m03 apiserver status = Running (err=<nil>)
	I0923 11:45:11.282761    8056 status.go:176] ha-667600-m03 status: &{Name:ha-667600-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:45:11.282761    8056 status.go:174] checking status of ha-667600-m04 ...
	I0923 11:45:11.298787    8056 cli_runner.go:164] Run: docker container inspect ha-667600-m04 --format={{.State.Status}}
	I0923 11:45:11.382738    8056 status.go:364] ha-667600-m04 host status = "Running" (err=<nil>)
	I0923 11:45:11.382738    8056 host.go:66] Checking if "ha-667600-m04" exists ...
	I0923 11:45:11.391826    8056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-667600-m04
	I0923 11:45:11.469757    8056 host.go:66] Checking if "ha-667600-m04" exists ...
	I0923 11:45:11.483739    8056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:45:11.491918    8056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-667600-m04
	I0923 11:45:11.570569    8056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55266 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\ha-667600-m04\id_rsa Username:docker}
	I0923 11:45:11.720049    8056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:45:11.746079    8056 status.go:176] ha-667600-m04 status: &{Name:ha-667600-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.2580109s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.26s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (147.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 node start m02 -v=7 --alsologtostderr
E0923 11:45:48.829702   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:46:18.613572   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 node start m02 -v=7 --alsologtostderr: (2m24.6356139s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr: (2.8317945s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
E0923 11:47:41.690898   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (147.67s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.9449654s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.95s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (220.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-667600 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-windows-amd64.exe stop -p ha-667600 -v=7 --alsologtostderr
E0923 11:48:04.955776   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-windows-amd64.exe stop -p ha-667600 -v=7 --alsologtostderr: (37.9939044s)
ha_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-667600 --wait=true -v=7 --alsologtostderr
E0923 11:48:32.672816   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:18.614224   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-667600 --wait=true -v=7 --alsologtostderr: (3m1.7009176s)
ha_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-667600
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (220.20s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 node delete m03 -v=7 --alsologtostderr: (14.712603s)
ha_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr: (2.126855s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.36s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.1709954s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.17s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 stop -v=7 --alsologtostderr: (36.0017048s)
ha_test.go:537: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr: exit status 7 (520.4755ms)

                                                
                                                
-- stdout --
	ha-667600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-667600-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-667600-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 11:52:20.510272    7184 out.go:345] Setting OutFile to fd 1564 ...
	I0923 11:52:20.595068    7184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:52:20.595068    7184 out.go:358] Setting ErrFile to fd 1188...
	I0923 11:52:20.595068    7184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:52:20.609809    7184 out.go:352] Setting JSON to false
	I0923 11:52:20.610353    7184 mustload.go:65] Loading cluster: ha-667600
	I0923 11:52:20.610353    7184 notify.go:220] Checking for updates...
	I0923 11:52:20.610612    7184 config.go:182] Loaded profile config "ha-667600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:52:20.611168    7184 status.go:174] checking status of ha-667600 ...
	I0923 11:52:20.630335    7184 cli_runner.go:164] Run: docker container inspect ha-667600 --format={{.State.Status}}
	I0923 11:52:20.712403    7184 status.go:364] ha-667600 host status = "Stopped" (err=<nil>)
	I0923 11:52:20.712403    7184 status.go:377] host is not running, skipping remaining checks
	I0923 11:52:20.712403    7184 status.go:176] ha-667600 status: &{Name:ha-667600 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:52:20.712403    7184 status.go:174] checking status of ha-667600-m02 ...
	I0923 11:52:20.729037    7184 cli_runner.go:164] Run: docker container inspect ha-667600-m02 --format={{.State.Status}}
	I0923 11:52:20.807877    7184 status.go:364] ha-667600-m02 host status = "Stopped" (err=<nil>)
	I0923 11:52:20.808884    7184 status.go:377] host is not running, skipping remaining checks
	I0923 11:52:20.808884    7184 status.go:176] ha-667600-m02 status: &{Name:ha-667600-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:52:20.808884    7184 status.go:174] checking status of ha-667600-m04 ...
	I0923 11:52:20.822877    7184 cli_runner.go:164] Run: docker container inspect ha-667600-m04 --format={{.State.Status}}
	I0923 11:52:20.896080    7184 status.go:364] ha-667600-m04 host status = "Stopped" (err=<nil>)
	I0923 11:52:20.896424    7184 status.go:377] host is not running, skipping remaining checks
	I0923 11:52:20.896424    7184 status.go:176] ha-667600-m04 status: &{Name:ha-667600-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (118.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-667600 --wait=true -v=7 --alsologtostderr --driver=docker
E0923 11:53:04.956793   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-667600 --wait=true -v=7 --alsologtostderr --driver=docker: (1m55.7123274s)
ha_test.go:566: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr: (2.2681146s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (118.56s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.3598061s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-667600 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-667600 --control-plane -v=7 --alsologtostderr: (1m17.4765619s)
ha_test.go:611: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-windows-amd64.exe -p ha-667600 status -v=7 --alsologtostderr: (2.7693829s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.25s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.8279573s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.83s)

                                                
                                    
TestImageBuild/serial/Setup (64.09s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-995600 --driver=docker
E0923 11:56:18.616440   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-995600 --driver=docker: (1m4.0852111s)
--- PASS: TestImageBuild/serial/Setup (64.09s)

                                                
                                    
TestImageBuild/serial/NormalBuild (5.73s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-995600
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-995600: (5.7318622s)
--- PASS: TestImageBuild/serial/NormalBuild (5.73s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (2.68s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-995600
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-995600: (2.6814308s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.68s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.68s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-995600
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-995600: (1.6800217s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.68s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.8s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-995600
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-995600: (1.7981483s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.80s)

                                                
                                    
TestJSONOutput/start/Command (98.72s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-132200 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0923 11:58:04.958066   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-132200 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m38.7148943s)
--- PASS: TestJSONOutput/start/Command (98.72s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (1.43s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-132200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-132200 --output=json --user=testUser: (1.4333726s)
--- PASS: TestJSONOutput/pause/Command (1.43s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (1.3s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-132200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-132200 --output=json --user=testUser: (1.3042183s)
--- PASS: TestJSONOutput/unpause/Command (1.30s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.21s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-132200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-132200 --output=json --user=testUser: (7.2053741s)
--- PASS: TestJSONOutput/stop/Command (7.21s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.96s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-128600 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-128600 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (281.9929ms)
-- stdout --
	{"specversion":"1.0","id":"946c5343-5111-4cd6-8081-3c574a89bc0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-128600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1faf49d9-d249-4e0c-8132-cf0fc5360bdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"2a4a05ee-3f57-4eae-94f1-abec8f24aa11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7a49b32f-855c-4015-b999-a51364192ba7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"c576428a-cf06-4626-a1af-8b4d37586564","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"8a3ad46b-c7c8-41ee-99c4-b0dec6278e90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"38e86e30-0411-49e6-9ec2-8ffb0f8b5d50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-128600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-128600
--- PASS: TestErrorJSONOutput (0.96s)
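The captured stdout above shows that `minikube start --output=json` emits one CloudEvents v1.0 envelope per line, with the failure reported as a `io.k8s.sigs.minikube.error` event carrying `exitcode` and `message` fields. As a sketch of how a consumer might pick out such error events (the helper name `minikube_errors` is ours; the field names are taken directly from the log above):

```python
import json

def minikube_errors(lines):
    """Yield (exitcode, message) for every minikube error event in a JSON log."""
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip anything that is not a JSON event line
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            data = event.get("data", {})
            yield data.get("exitcode"), data.get("message")

# Sample event, trimmed from the captured stdout above.
sample = [('{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
           '"datacontenttype":"application/json",'
           '"data":{"exitcode":"56",'
           '"message":"The driver \'fail\' is not supported on windows/amd64"}}')]

print(list(minikube_errors(sample)))
```

Run against the sample line, this yields the exit code `"56"` and the driver-unsupported message, matching the `exit status 56` the test asserts on.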

                                                
                                    
TestKicCustomNetwork/create_custom_network (73.41s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-528600 --network=
E0923 11:59:28.039325   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-528600 --network=: (1m9.0854804s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-528600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-528600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-528600: (4.2227028s)
--- PASS: TestKicCustomNetwork/create_custom_network (73.41s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (70.28s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-746200 --network=bridge
E0923 12:01:18.617217   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-746200 --network=bridge: (1m6.5420837s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-746200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-746200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-746200: (3.6210823s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (70.28s)

                                                
                                    
TestKicExistingNetwork (69.67s)
=== RUN   TestKicExistingNetwork
I0923 12:01:38.714940   13200 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 12:01:38.798209   13200 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 12:01:38.806707   13200 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0923 12:01:38.807228   13200 cli_runner.go:164] Run: docker network inspect existing-network
W0923 12:01:38.884452   13200 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0923 12:01:38.884452   13200 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0923 12:01:38.885005   13200 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0923 12:01:38.893966   13200 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 12:01:38.991439   13200 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000334a0}
I0923 12:01:38.995210   13200 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0923 12:01:39.001858   13200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W0923 12:01:39.088850   13200 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W0923 12:01:39.088850   13200 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W0923 12:01:39.088850   13200 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I0923 12:01:39.119646   13200 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0923 12:01:39.138226   13200 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000af2600}
I0923 12:01:39.138226   13200 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0923 12:01:39.146068   13200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0923 12:01:39.338481   13200 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-481700 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-481700 --network=existing-network: (1m5.5178631s)
helpers_test.go:175: Cleaning up "existing-network-481700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-481700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-481700: (3.337258s)
I0923 12:02:48.298949   13200 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (69.67s)
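The log above shows minikube's subnet retry in action: `docker network create` on 192.168.49.0/24 fails with "Pool overlaps with other one on this address space", the subnet is marked reserved, and the next candidate 192.168.58.0/24 succeeds. A simplified sketch of that selection loop follows; the step of 9 in the third octet is inferred from the 49 → 58 jump in this log, and the real logic in minikube's network package is more involved:

```python
import ipaddress

def pick_free_subnet(reserved, start="192.168.49.0/24", step=9, tries=20):
    """Return the first candidate /24 that is not in the reserved set."""
    net = ipaddress.ip_network(start)
    for _ in range(tries):
        if str(net) not in reserved:
            return str(net)
        # Bump the third octet by `step`: 192.168.49.0 -> 192.168.58.0 -> ...
        net = ipaddress.ip_network(
            (int(net.network_address) + step * 256, net.prefixlen))
    raise RuntimeError("no free private subnet found")

# 192.168.49.0/24 is taken (as in the log), so the next candidate is chosen.
print(pick_free_subnet({"192.168.49.0/24"}))
```

With 192.168.49.0/24 reserved this returns 192.168.58.0/24, mirroring the retry recorded by `network_create.go` above.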

                                                
                                    
TestKicCustomSubnet (70.37s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-514200 --subnet=192.168.60.0/24
E0923 12:03:04.960064   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-514200 --subnet=192.168.60.0/24: (1m6.1047044s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-514200 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-514200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-514200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-514200: (4.1643355s)
--- PASS: TestKicCustomSubnet (70.37s)
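The subnet check above uses a Go template, `docker network inspect <name> --format "{{(index .IPAM.Config 0).Subnet}}"`. The same value can be read from the JSON that `docker network inspect` prints (an array of network objects). The sample below is a minimal hand-written stand-in for real inspect output, not captured from this run; only the `IPAM.Config[0].Subnet` path is taken from the template in the log:

```python
import json

def subnet_of(inspect_json):
    """Extract the first IPAM subnet from `docker network inspect` JSON."""
    networks = json.loads(inspect_json)
    return networks[0]["IPAM"]["Config"][0]["Subnet"]

# Hand-written stand-in for `docker network inspect custom-subnet-514200`.
sample = json.dumps([{
    "Name": "custom-subnet-514200",
    "IPAM": {"Config": [{"Subnet": "192.168.60.0/24",
                         "Gateway": "192.168.60.1"}]},
}])

print(subnet_of(sample))
```

For the network started with `--subnet=192.168.60.0/24` above, both the template and this extraction report the same subnet string.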

                                                
                                    
TestKicStaticIP (68.92s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-488100 --static-ip=192.168.200.200
E0923 12:04:21.699824   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-488100 --static-ip=192.168.200.200: (1m4.3912496s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-488100 ip
helpers_test.go:175: Cleaning up "static-ip-488100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-488100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-488100: (4.0398336s)
--- PASS: TestKicStaticIP (68.92s)

                                                
                                    
TestMainNoArgs (0.24s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.24s)

                                                
                                    
TestMinikubeProfile (136.48s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-538600 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-538600 --driver=docker: (1m1.2150844s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-538600 --driver=docker
E0923 12:06:18.619042   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-538600 --driver=docker: (1m1.5049156s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-538600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.0416991s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-538600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.2148607s)
helpers_test.go:175: Cleaning up "second-538600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-538600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-538600: (4.430635s)
helpers_test.go:175: Cleaning up "first-538600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-538600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-538600: (4.3089961s)
--- PASS: TestMinikubeProfile (136.48s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (19.42s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-901300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-901300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (18.4228586s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.42s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.78s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-901300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.78s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (17.55s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-901300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-901300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (16.5485145s)
--- PASS: TestMountStart/serial/StartWithMountSecond (17.55s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.77s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-901300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.77s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.82s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-901300 --alsologtostderr -v=5
E0923 12:08:04.962370   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-901300 --alsologtostderr -v=5: (2.8160367s)
--- PASS: TestMountStart/serial/DeleteFirst (2.82s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.75s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-901300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.75s)

                                                
                                    
TestMountStart/serial/Stop (2.06s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-901300
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-901300: (2.0523371s)
--- PASS: TestMountStart/serial/Stop (2.06s)

                                                
                                    
TestMountStart/serial/RestartStopped (12.40s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-901300
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-901300: (11.3976002s)
--- PASS: TestMountStart/serial/RestartStopped (12.40s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.77s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-901300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.77s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (147.18s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-843200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-843200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m25.3738879s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr: (1.8072958s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (147.18s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (37.97s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- rollout status deployment/busybox
E0923 12:11:18.621271   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- rollout status deployment/busybox: (31.015953s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-6r28t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-6r28t -- nslookup kubernetes.io: (1.6735266s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-fn6j5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-fn6j5 -- nslookup kubernetes.io: (1.573703s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-6r28t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-fn6j5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-6r28t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-fn6j5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.97s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (2.56s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-6r28t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-6r28t -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-fn6j5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-843200 -- exec busybox-7dff88458-fn6j5 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.56s)

                                                
                                    
TestMultiNode/serial/AddNode (48.81s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-843200 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-843200 -v 3 --alsologtostderr: (46.4626273s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr: (2.3502191s)
--- PASS: TestMultiNode/serial/AddNode (48.81s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.19s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-843200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

                                                
                                    
TestMultiNode/serial/ProfileList (1.94s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.936844s)
--- PASS: TestMultiNode/serial/ProfileList (1.94s)

                                                
                                    
TestMultiNode/serial/CopyFile (26.52s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 status --output json --alsologtostderr: (1.8720882s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp testdata\cp-test.txt multinode-843200:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile3534072328\001\cp-test_multinode-843200.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200:/home/docker/cp-test.txt multinode-843200-m02:/home/docker/cp-test_multinode-843200_multinode-843200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200:/home/docker/cp-test.txt multinode-843200-m02:/home/docker/cp-test_multinode-843200_multinode-843200-m02.txt: (1.072102s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m02 "sudo cat /home/docker/cp-test_multinode-843200_multinode-843200-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200:/home/docker/cp-test.txt multinode-843200-m03:/home/docker/cp-test_multinode-843200_multinode-843200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200:/home/docker/cp-test.txt multinode-843200-m03:/home/docker/cp-test_multinode-843200_multinode-843200-m03.txt: (1.0706547s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m03 "sudo cat /home/docker/cp-test_multinode-843200_multinode-843200-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp testdata\cp-test.txt multinode-843200-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile3534072328\001\cp-test_multinode-843200-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200-m02:/home/docker/cp-test.txt multinode-843200:/home/docker/cp-test_multinode-843200-m02_multinode-843200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200-m02:/home/docker/cp-test.txt multinode-843200:/home/docker/cp-test_multinode-843200-m02_multinode-843200.txt: (1.083202s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200 "sudo cat /home/docker/cp-test_multinode-843200-m02_multinode-843200.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200-m02:/home/docker/cp-test.txt multinode-843200-m03:/home/docker/cp-test_multinode-843200-m02_multinode-843200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200-m02:/home/docker/cp-test.txt multinode-843200-m03:/home/docker/cp-test_multinode-843200-m02_multinode-843200-m03.txt: (1.0799941s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m03 "sudo cat /home/docker/cp-test_multinode-843200-m02_multinode-843200-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp testdata\cp-test.txt multinode-843200-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile3534072328\001\cp-test_multinode-843200-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200-m03:/home/docker/cp-test.txt multinode-843200:/home/docker/cp-test_multinode-843200-m03_multinode-843200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200-m03:/home/docker/cp-test.txt multinode-843200:/home/docker/cp-test_multinode-843200-m03_multinode-843200.txt: (1.1598191s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200 "sudo cat /home/docker/cp-test_multinode-843200-m03_multinode-843200.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200-m03:/home/docker/cp-test.txt multinode-843200-m02:/home/docker/cp-test_multinode-843200-m03_multinode-843200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 cp multinode-843200-m03:/home/docker/cp-test.txt multinode-843200-m02:/home/docker/cp-test_multinode-843200-m03_multinode-843200-m02.txt: (1.0682358s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 ssh -n multinode-843200-m02 "sudo cat /home/docker/cp-test_multinode-843200-m03_multinode-843200-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (26.52s)

                                                
                                    
TestMultiNode/serial/StopNode (4.85s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 node stop m03: (1.9544582s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-843200 status: exit status 7 (1.4442803s)
-- stdout --
	multinode-843200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-843200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-843200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr: exit status 7 (1.4484439s)
-- stdout --
	multinode-843200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-843200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-843200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0923 12:12:54.246632   11060 out.go:345] Setting OutFile to fd 1292 ...
	I0923 12:12:54.319256   11060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:12:54.319835   11060 out.go:358] Setting ErrFile to fd 1432...
	I0923 12:12:54.319835   11060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:12:54.333834   11060 out.go:352] Setting JSON to false
	I0923 12:12:54.333834   11060 mustload.go:65] Loading cluster: multinode-843200
	I0923 12:12:54.333834   11060 notify.go:220] Checking for updates...
	I0923 12:12:54.334859   11060 config.go:182] Loaded profile config "multinode-843200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:12:54.334859   11060 status.go:174] checking status of multinode-843200 ...
	I0923 12:12:54.350829   11060 cli_runner.go:164] Run: docker container inspect multinode-843200 --format={{.State.Status}}
	I0923 12:12:54.433593   11060 status.go:364] multinode-843200 host status = "Running" (err=<nil>)
	I0923 12:12:54.433593   11060 host.go:66] Checking if "multinode-843200" exists ...
	I0923 12:12:54.442597   11060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-843200
	I0923 12:12:54.513635   11060 host.go:66] Checking if "multinode-843200" exists ...
	I0923 12:12:54.525594   11060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:12:54.532658   11060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-843200
	I0923 12:12:54.597599   11060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56901 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-843200\id_rsa Username:docker}
	I0923 12:12:54.744204   11060 ssh_runner.go:195] Run: systemctl --version
	I0923 12:12:54.768210   11060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:12:54.801710   11060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-843200
	I0923 12:12:54.872624   11060 kubeconfig.go:125] found "multinode-843200" server: "https://127.0.0.1:56900"
	I0923 12:12:54.872624   11060 api_server.go:166] Checking apiserver status ...
	I0923 12:12:54.883915   11060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:12:54.922422   11060 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2473/cgroup
	I0923 12:12:54.947876   11060 api_server.go:182] apiserver freezer: "7:freezer:/docker/6870c6bddf8aa10002f3d21eb7437e1849254ad78c45f58cc10dcbd211189948/kubepods/burstable/pod1f373b1e457bf6c0c26c496048d3a85a/06a4450a481a0b0292da83b17313edf3a29868f5697ab75f8c8a029d0e516b51"
	I0923 12:12:54.960241   11060 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6870c6bddf8aa10002f3d21eb7437e1849254ad78c45f58cc10dcbd211189948/kubepods/burstable/pod1f373b1e457bf6c0c26c496048d3a85a/06a4450a481a0b0292da83b17313edf3a29868f5697ab75f8c8a029d0e516b51/freezer.state
	I0923 12:12:54.983656   11060 api_server.go:204] freezer state: "THAWED"
	I0923 12:12:54.983835   11060 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56900/healthz ...
	I0923 12:12:55.001004   11060 api_server.go:279] https://127.0.0.1:56900/healthz returned 200:
	ok
	I0923 12:12:55.001004   11060 status.go:456] multinode-843200 apiserver status = Running (err=<nil>)
	I0923 12:12:55.001004   11060 status.go:176] multinode-843200 status: &{Name:multinode-843200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:12:55.001004   11060 status.go:174] checking status of multinode-843200-m02 ...
	I0923 12:12:55.017799   11060 cli_runner.go:164] Run: docker container inspect multinode-843200-m02 --format={{.State.Status}}
	I0923 12:12:55.105265   11060 status.go:364] multinode-843200-m02 host status = "Running" (err=<nil>)
	I0923 12:12:55.105265   11060 host.go:66] Checking if "multinode-843200-m02" exists ...
	I0923 12:12:55.115099   11060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-843200-m02
	I0923 12:12:55.185471   11060 host.go:66] Checking if "multinode-843200-m02" exists ...
	I0923 12:12:55.197480   11060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:12:55.204540   11060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-843200-m02
	I0923 12:12:55.279747   11060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56974 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-843200-m02\id_rsa Username:docker}
	I0923 12:12:55.421700   11060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:12:55.456725   11060 status.go:176] multinode-843200-m02 status: &{Name:multinode-843200-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:12:55.456725   11060 status.go:174] checking status of multinode-843200-m03 ...
	I0923 12:12:55.477083   11060 cli_runner.go:164] Run: docker container inspect multinode-843200-m03 --format={{.State.Status}}
	I0923 12:12:55.549613   11060 status.go:364] multinode-843200-m03 host status = "Stopped" (err=<nil>)
	I0923 12:12:55.549613   11060 status.go:377] host is not running, skipping remaining checks
	I0923 12:12:55.549613   11060 status.go:176] multinode-843200-m03 status: &{Name:multinode-843200-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.85s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (18.30s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 node start m03 -v=7 --alsologtostderr
E0923 12:13:04.964190   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 node start m03 -v=7 --alsologtostderr: (16.2580037s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 status -v=7 --alsologtostderr: (1.8527877s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (18.30s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (116.17s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-843200
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-843200
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-843200: (24.8660214s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-843200 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-843200 --wait=true -v=8 --alsologtostderr: (1m30.8341216s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-843200
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.17s)

                                                
                                    
TestMultiNode/serial/DeleteNode (9.97s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 node delete m03: (8.1463804s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr: (1.3158442s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (9.97s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.33s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 stop: (23.5199064s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-843200 status: exit status 7 (409.3481ms)
-- stdout --
	multinode-843200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-843200-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr: exit status 7 (396.9498ms)
-- stdout --
	multinode-843200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-843200-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0923 12:15:44.053547   10088 out.go:345] Setting OutFile to fd 1184 ...
	I0923 12:15:44.131296   10088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:15:44.131296   10088 out.go:358] Setting ErrFile to fd 1220...
	I0923 12:15:44.131296   10088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:15:44.146287   10088 out.go:352] Setting JSON to false
	I0923 12:15:44.146287   10088 mustload.go:65] Loading cluster: multinode-843200
	I0923 12:15:44.146287   10088 notify.go:220] Checking for updates...
	I0923 12:15:44.147295   10088 config.go:182] Loaded profile config "multinode-843200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:15:44.147295   10088 status.go:174] checking status of multinode-843200 ...
	I0923 12:15:44.163284   10088 cli_runner.go:164] Run: docker container inspect multinode-843200 --format={{.State.Status}}
	I0923 12:15:44.236559   10088 status.go:364] multinode-843200 host status = "Stopped" (err=<nil>)
	I0923 12:15:44.236559   10088 status.go:377] host is not running, skipping remaining checks
	I0923 12:15:44.236559   10088 status.go:176] multinode-843200 status: &{Name:multinode-843200 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:15:44.236559   10088 status.go:174] checking status of multinode-843200-m02 ...
	I0923 12:15:44.252729   10088 cli_runner.go:164] Run: docker container inspect multinode-843200-m02 --format={{.State.Status}}
	I0923 12:15:44.319767   10088 status.go:364] multinode-843200-m02 host status = "Stopped" (err=<nil>)
	I0923 12:15:44.319873   10088 status.go:377] host is not running, skipping remaining checks
	I0923 12:15:44.319873   10088 status.go:176] multinode-843200-m02 status: &{Name:multinode-843200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.33s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (74.14s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-843200 --wait=true -v=8 --alsologtostderr --driver=docker
E0923 12:16:08.048297   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:16:18.623572   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-843200 --wait=true -v=8 --alsologtostderr --driver=docker: (1m12.2189443s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-843200 status --alsologtostderr: (1.4350119s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (74.14s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (67.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-843200
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-843200-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-843200-m02 --driver=docker: exit status 14 (294.4943ms)

                                                
                                                
-- stdout --
	* [multinode-843200-m02] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-843200-m02' is duplicated with machine name 'multinode-843200-m02' in profile 'multinode-843200'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-843200-m03 --driver=docker
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-843200-m03 --driver=docker: (1m1.2984568s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-843200
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-843200: exit status 80 (824.5405ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-843200 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-843200-m03 already exists in multinode-843200-m03 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-843200-m03
E0923 12:18:04.966076   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-843200-m03: (4.700131s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (67.36s)

                                                
                                    
TestPreload (172.58s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-058100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-058100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (1m53.3623424s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-058100 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-058100 image pull gcr.io/k8s-minikube/busybox: (2.1752078s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-058100
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-058100: (12.2793313s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-058100 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-058100 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (39.9875345s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-058100 image list
E0923 12:21:01.708649   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:175: Cleaning up "test-preload-058100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-058100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-058100: (4.1247675s)
--- PASS: TestPreload (172.58s)

                                                
                                    
TestScheduledStopWindows (133s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-258100 --memory=2048 --driver=docker
E0923 12:21:18.625641   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-258100 --memory=2048 --driver=docker: (1m3.9434026s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-258100 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-258100 --schedule 5m: (1.4287803s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-258100 -n scheduled-stop-258100
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-258100 -n scheduled-stop-258100: (1.1421944s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-258100 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-258100 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-258100 --schedule 5s: (1.80227s)
E0923 12:23:04.968993   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-258100
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-258100: exit status 7 (334.4438ms)

                                                
                                                
-- stdout --
	scheduled-stop-258100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-258100 -n scheduled-stop-258100
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-258100 -n scheduled-stop-258100: exit status 7 (335.9435ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-258100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-258100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-258100: (3.0576902s)
--- PASS: TestScheduledStopWindows (133.00s)

                                                
                                    
TestInsufficientStorage (42.25s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-250800 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-250800 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (37.4917596s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"879dc0fa-0a5d-4429-8649-bf3d879f4ee5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-250800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0db0849e-2a62-4d9a-a2a7-538ce456bb58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"876180f3-61ca-49b2-92e3-66a99aed1169","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"051eee51-8856-41d8-b600-07fd6052cf6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"253ec144-5f37-49db-93ac-8c28e6d32182","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"1f2f9d6a-e485-4a32-a925-4fe328e881ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b61985f-6c11-4f95-9cf8-cdba431a605f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a577d142-0553-4f03-b4f5-0b536f2086de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3365468e-9e3c-4221-8cb4-4a8577f6a5d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fa508db-f676-4d1c-b68e-4a3904056a22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"40de65ed-7886-4aef-9b2e-7561dc9dcfc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-250800\" primary control-plane node in \"insufficient-storage-250800\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"05549e02-8294-4b38-862c-0148e1f9c826","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6dd11e31-ea1b-461c-8fcd-9e2c69213bb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d92e6da6-9093-4961-ade0-403db1b1678b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-250800 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-250800 --output=json --layout=cluster: exit status 7 (821.9961ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-250800","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-250800","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 12:23:57.622153    8964 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-250800" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-250800 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-250800 --output=json --layout=cluster: exit status 7 (824.0937ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-250800","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-250800","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 12:23:58.450985    5400 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-250800" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	E0923 12:23:58.487256    5400 status.go:258] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\insufficient-storage-250800\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-250800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-250800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-250800: (3.1023313s)
--- PASS: TestInsufficientStorage (42.25s)

                                                
                                    
TestRunningBinaryUpgrade (188.72s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.2238951773.exe start -p running-upgrade-051700 --memory=2200 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.2238951773.exe start -p running-upgrade-051700 --memory=2200 --vm-driver=docker: (1m49.2603902s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-051700 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0923 12:31:18.630336   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-051700 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m13.8561842s)
helpers_test.go:175: Cleaning up "running-upgrade-051700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-051700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-051700: (4.9469413s)
--- PASS: TestRunningBinaryUpgrade (188.72s)

                                                
                                    
TestKubernetesUpgrade (239.37s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-214700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-214700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker: (1m48.0721692s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-214700
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-214700: (9.0437692s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-214700 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-214700 status --format={{.Host}}: exit status 7 (398.9629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-214700 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker
E0923 12:28:04.970416   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-214700 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker: (1m13.8016428s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-214700 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-214700 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-214700 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker: exit status 106 (358.6533ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-214700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-214700
	    minikube start -p kubernetes-upgrade-214700 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2147002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-214700 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-214700 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-214700 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker: (40.5031889s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-214700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-214700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-214700: (6.992086s)
--- PASS: TestKubernetesUpgrade (239.37s)

                                                
                                    
TestMissingContainerUpgrade (394.02s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3848105304.exe start -p missing-upgrade-473500 --memory=2200 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3848105304.exe start -p missing-upgrade-473500 --memory=2200 --driver=docker: (4m1.7612088s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-473500
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-473500: (10.9664298s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-473500
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-473500 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-473500 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m13.9242729s)
helpers_test.go:175: Cleaning up "missing-upgrade-473500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-473500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-473500: (6.0658923s)
--- PASS: TestMissingContainerUpgrade (394.02s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-770200 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-770200 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (423.7773ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-770200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.42s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (102.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-770200 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-770200 --driver=docker: (1m41.515666s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-770200 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-770200 status -o json: (1.1258151s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (102.64s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (28.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-770200 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-770200 --no-kubernetes --driver=docker: (23.6813975s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-770200 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-770200 status -o json: exit status 2 (855.4343ms)

-- stdout --
	{"Name":"NoKubernetes-770200","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-770200
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-770200: (4.2188889s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.76s)

TestNoKubernetes/serial/Start (30.74s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-770200 --no-kubernetes --driver=docker
E0923 12:26:18.627513   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-770200 --no-kubernetes --driver=docker: (30.7426913s)
--- PASS: TestNoKubernetes/serial/Start (30.74s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.8s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-770200 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-770200 "sudo systemctl is-active --quiet service kubelet": exit status 1 (795.6617ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.80s)

TestNoKubernetes/serial/ProfileList (4.67s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (2.2813375s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (2.3855802s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.67s)

TestNoKubernetes/serial/Stop (2.61s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-770200
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-770200: (2.6108779s)
--- PASS: TestNoKubernetes/serial/Stop (2.61s)

TestNoKubernetes/serial/StartNoArgs (20.2s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-770200 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-770200 --driver=docker: (20.2043235s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.20s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.89s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-770200 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-770200 "sudo systemctl is-active --quiet service kubelet": exit status 1 (894.7996ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.89s)

TestStoppedBinaryUpgrade/Setup (0.83s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.83s)

TestStoppedBinaryUpgrade/Upgrade (263.48s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.1183433147.exe start -p stopped-upgrade-902800 --memory=2200 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.1183433147.exe start -p stopped-upgrade-902800 --memory=2200 --vm-driver=docker: (2m37.278613s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.1183433147.exe -p stopped-upgrade-902800 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.1183433147.exe -p stopped-upgrade-902800 stop: (13.88448s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-902800 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-902800 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m32.3129304s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (263.48s)

TestPause/serial/Start (149.19s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-050600 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-050600 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m29.1939278s)
--- PASS: TestPause/serial/Start (149.19s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-902800
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-902800: (3.4262086s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.43s)

TestPause/serial/SecondStartNoReconfiguration (37.16s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-050600 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-050600 --alsologtostderr -v=1 --driver=docker: (37.1358294s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.16s)

TestPause/serial/Pause (1.68s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-050600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-050600 --alsologtostderr -v=5: (1.6760803s)
--- PASS: TestPause/serial/Pause (1.68s)

TestPause/serial/VerifyStatus (0.99s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-050600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-050600 --output=json --layout=cluster: exit status 2 (985.9912ms)

-- stdout --
	{"Name":"pause-050600","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-050600","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.99s)

TestPause/serial/Unpause (1.56s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-050600 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-050600 --alsologtostderr -v=5: (1.557173s)
--- PASS: TestPause/serial/Unpause (1.56s)

TestPause/serial/PauseAgain (1.75s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-050600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-050600 --alsologtostderr -v=5: (1.7507245s)
--- PASS: TestPause/serial/PauseAgain (1.75s)

TestPause/serial/DeletePaused (5.98s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-050600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-050600 --alsologtostderr -v=5: (5.9799994s)
--- PASS: TestPause/serial/DeletePaused (5.98s)

TestPause/serial/VerifyDeletedResources (2.31s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0224059s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-050600
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-050600: exit status 1 (86.026ms)

-- stdout --
	[]

-- /stdout --
** stderr **
	Error response from daemon: get pause-050600: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.31s)

TestStartStop/group/old-k8s-version/serial/FirstStart (232.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-694600 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-694600 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (3m52.3742558s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (232.37s)

TestStartStop/group/no-preload/serial/FirstStart (133.86s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-732200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-732200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1: (2m13.8594388s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (133.86s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-558800 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-558800 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1: (1m56.7545379s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.75s)

TestStartStop/group/newest-cni/serial/FirstStart (78.15s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-067300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-067300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1: (1m18.1453047s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (78.15s)

TestStartStop/group/no-preload/serial/DeployApp (11.02s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-732200 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [56764901-72aa-44b1-b130-ed979fa6a2d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [56764901-72aa-44b1-b130-ed979fa6a2d4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0094065s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-732200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.02s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-067300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-067300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.2132019s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.21s)

TestStartStop/group/newest-cni/serial/Stop (13.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-067300 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-067300 --alsologtostderr -v=3: (13.0157515s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.02s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-558800 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c0c9a19a-e6e1-45f7-b9bd-919c1366d080] Pending
helpers_test.go:344: "busybox" [c0c9a19a-e6e1-45f7-b9bd-919c1366d080] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c0c9a19a-e6e1-45f7-b9bd-919c1366d080] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.0192932s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-558800 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.01s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.34s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-732200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-732200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.0096996s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-732200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.34s)

TestStartStop/group/no-preload/serial/Stop (12.94s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-732200 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-732200 --alsologtostderr -v=3: (12.9298636s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.94s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-558800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-558800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.6668688s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-558800 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.17s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.84s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-067300 -n newest-cni-067300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-067300 -n newest-cni-067300: exit status 7 (351.092ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-067300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.84s)

TestStartStop/group/newest-cni/serial/SecondStart (41.14s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-067300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-067300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1: (39.2579132s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-067300 -n newest-cni-067300
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-067300 -n newest-cni-067300: (1.8838748s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (41.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-558800 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-558800 --alsologtostderr -v=3: (13.0491287s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.81s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-732200 -n no-preload-732200
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-732200 -n no-preload-732200: exit status 7 (345.1518ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-732200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.81s)

TestStartStop/group/no-preload/serial/SecondStart (329.48s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-732200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-732200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1: (5m28.0815172s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-732200 -n no-preload-732200
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-732200 -n no-preload-732200: (1.3985032s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (329.48s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800: exit status 7 (373.8499ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-558800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.99s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-558800 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1
E0923 12:36:18.632842   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-558800 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1: (4m56.204353s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800: (1.013814s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-067300 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (12.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-067300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-067300 --alsologtostderr -v=1: (2.1212466s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-067300 -n newest-cni-067300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-067300 -n newest-cni-067300: exit status 2 (1.5346473s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-067300 -n newest-cni-067300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-067300 -n newest-cni-067300: exit status 2 (1.7647447s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-067300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-067300 --alsologtostderr -v=1: (2.8126359s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-067300 -n newest-cni-067300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-067300 -n newest-cni-067300: (2.160911s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-067300 -n newest-cni-067300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-067300 -n newest-cni-067300: (2.1082835s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (12.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (95.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-648500 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-648500 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1: (1m35.8823416s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (18.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-694600 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context old-k8s-version-694600 create -f testdata\busybox.yaml: (3.4023649s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1ba5c37a-4c7f-4144-b6f2-f662da6e5452] Pending
helpers_test.go:344: "busybox" [1ba5c37a-4c7f-4144-b6f2-f662da6e5452] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1ba5c37a-4c7f-4144-b6f2-f662da6e5452] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 15.0090854s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-694600 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (18.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-694600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-694600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.1784898s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-694600 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-694600 --alsologtostderr -v=3
E0923 12:37:41.718086   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-694600 --alsologtostderr -v=3: (13.0118822s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-694600 -n old-k8s-version-694600
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-694600 -n old-k8s-version-694600: exit status 7 (405.6217ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-694600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-648500 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [07331c26-79f1-4eb5-a3e6-181fe3e9f217] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [07331c26-79f1-4eb5-a3e6-181fe3e9f217] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0089299s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-648500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-648500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-648500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.433333s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-648500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-648500 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-648500 --alsologtostderr -v=3: (12.9297348s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-648500 -n embed-certs-648500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-648500 -n embed-certs-648500: exit status 7 (344.2197ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-648500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (294.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-648500 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-648500 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1: (4m53.5641124s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-648500 -n embed-certs-648500
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-648500 -n embed-certs-648500: (1.0665225s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (294.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qjfw2" [0b4ff7bd-e6a2-498b-b4b9-84db0e1848b0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.010061s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qjfw2" [0b4ff7bd-e6a2-498b-b4b9-84db0e1848b0] Running
E0923 12:41:18.634413   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0115878s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-558800 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-558800 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.67s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-558800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-558800 --alsologtostderr -v=1: (1.648819s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800: exit status 2 (964.1728ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800: exit status 2 (972.5469ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-558800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-558800 --alsologtostderr -v=1: (1.4374867s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800: (1.3481341s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-558800 -n default-k8s-diff-port-558800: (1.016638s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sm9qq" [f43b77cc-4e5c-484e-b98c-08a2dec72d14] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0080769s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (100.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m40.0699779s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sm9qq" [f43b77cc-4e5c-484e-b98c-08a2dec72d14] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007839s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-732200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-732200 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (7.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-732200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-732200 --alsologtostderr -v=1: (1.583541s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-732200 -n no-preload-732200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-732200 -n no-preload-732200: exit status 2 (965.7258ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-732200 -n no-preload-732200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-732200 -n no-preload-732200: exit status 2 (938.667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-732200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-732200 --alsologtostderr -v=1: (1.4970663s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-732200 -n no-preload-732200
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-732200 -n no-preload-732200: (1.3952512s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-732200 -n no-preload-732200
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-732200 -n no-preload-732200: (1.0264231s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (7.41s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (161.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
E0923 12:43:04.978030   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (2m41.3175463s)
--- PASS: TestNetworkPlugins/group/calico/Start (161.32s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-579000 "pgrep -a kubelet"
I0923 12:43:16.751502   13200 config.go:182] Loaded profile config "auto-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.99s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (21.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-579000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z8vg9" [726d2603-45a2-496b-8b5b-2ec1cc95a8e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z8vg9" [726d2603-45a2-496b-8b5b-2ec1cc95a8e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 21.0092927s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (21.80s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-579000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.43s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x94lh" [d8af11b5-a4ef-4997-a463-a671284925e8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0089336s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.46s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x94lh" [d8af11b5-a4ef-4997-a463-a671284925e8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0088031s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-648500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.46s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.76s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-648500 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.76s)

TestStartStop/group/embed-certs/serial/Pause (9.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-648500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-648500 --alsologtostderr -v=1: (1.8867367s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-648500 -n embed-certs-648500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-648500 -n embed-certs-648500: exit status 2 (1.1390026s)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-648500 -n embed-certs-648500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-648500 -n embed-certs-648500: exit status 2 (1.0623313s)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-648500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-648500 --alsologtostderr -v=1: (1.5130908s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-648500 -n embed-certs-648500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-648500 -n embed-certs-648500: (1.3835194s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-648500 -n embed-certs-648500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-648500 -n embed-certs-648500: (1.2395789s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (9.16s)

TestNetworkPlugins/group/custom-flannel/Start (113.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m53.8440752s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (113.85s)

TestNetworkPlugins/group/false/Start (119.76s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m59.7615271s)
--- PASS: TestNetworkPlugins/group/false/Start (119.76s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bk8ls" [7cfb0ab3-b6f0-46f0-add3-5aabaf0973a9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.1412089s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.14s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-sf6t5" [104f3d55-4b0c-4b18-90b8-8cef7f9c60a2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0105069s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bk8ls" [7cfb0ab3-b6f0-46f0-add3-5aabaf0973a9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.1745947s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-694600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.90s)

TestNetworkPlugins/group/calico/KubeletFlags (0.77s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-579000 "pgrep -a kubelet"
I0923 12:44:54.072862   13200 config.go:182] Loaded profile config "calico-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.77s)

TestNetworkPlugins/group/calico/NetCatPod (30.41s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-579000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-579000 replace --force -f testdata\netcat-deployment.yaml: (3.6341756s)
I0923 12:44:57.730252   13200 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0923 12:44:57.997181   13200 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v6bpv" [337fcc73-ab9b-478a-a190-aa135548c1d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v6bpv" [337fcc73-ab9b-478a-a190-aa135548c1d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 26.0131877s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (30.41s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-694600 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.73s)

TestStartStop/group/old-k8s-version/serial/Pause (14.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-694600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-694600 --alsologtostderr -v=1: (5.887689s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-694600 -n old-k8s-version-694600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-694600 -n old-k8s-version-694600: exit status 2 (1.176569s)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-694600 -n old-k8s-version-694600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-694600 -n old-k8s-version-694600: exit status 2 (1.2099892s)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-694600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-694600 --alsologtostderr -v=1: (2.627665s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-694600 -n old-k8s-version-694600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-694600 -n old-k8s-version-694600: (2.2461755s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-694600 -n old-k8s-version-694600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-694600 -n old-k8s-version-694600: (1.3159861s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (14.46s)
E0923 12:49:39.471421   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (114.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m54.1742385s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (114.17s)

TestNetworkPlugins/group/calico/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-579000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.38s)

TestNetworkPlugins/group/calico/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.36s)

TestNetworkPlugins/group/calico/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.34s)

TestNetworkPlugins/group/flannel/Start (104.11s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E0923 12:46:18.225603   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-732200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:46:18.637021   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-827700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m44.1072573s)
--- PASS: TestNetworkPlugins/group/flannel/Start (104.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-579000 "pgrep -a kubelet"
I0923 12:46:23.058492   13200 config.go:182] Loaded profile config "custom-flannel-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.88s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (25.84s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-579000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p2t8p" [ec5a6427-62aa-4b22-b081-0451972b7ea2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 12:46:26.241483   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-558800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-p2t8p" [ec5a6427-62aa-4b22-b081-0451972b7ea2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 25.0088266s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (25.84s)

TestNetworkPlugins/group/false/KubeletFlags (1.78s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-579000 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-579000 "pgrep -a kubelet": (1.777455s)
I0923 12:46:32.468399   13200 config.go:182] Loaded profile config "false-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.78s)

TestNetworkPlugins/group/false/NetCatPod (25.81s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-579000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context false-579000 replace --force -f testdata\netcat-deployment.yaml: (2.2221733s)
I0923 12:46:34.733230   13200 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0923 12:46:35.516155   13200 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8kj2j" [8ce06509-c5fd-479d-a455-920dd95edddc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8kj2j" [8ce06509-c5fd-479d-a455-920dd95edddc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 22.0114259s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (25.81s)

TestNetworkPlugins/group/custom-flannel/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-579000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.40s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.35s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.38s)

TestNetworkPlugins/group/false/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-579000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.39s)

TestNetworkPlugins/group/false/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.35s)

TestNetworkPlugins/group/false/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0923 12:46:59.188142   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-732200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.34s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5sd5n" [b7514ac9-ec6c-4aa1-b6ba-ac236db1fad8] Running
E0923 12:47:17.817084   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:47:17.825083   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:47:17.838031   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:47:17.863087   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:47:17.905836   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:47:17.988756   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:47:18.151751   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:47:18.474753   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:47:19.117449   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:47:20.400445   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0116161s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.9s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-579000 "pgrep -a kubelet"
I0923 12:47:21.671161   13200 config.go:182] Loaded profile config "kindnet-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.90s)

TestNetworkPlugins/group/kindnet/NetCatPod (19.76s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-579000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vf9lt" [bd0874f9-2b20-41d1-ac60-ce02f1ef8659] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 12:47:22.963955   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:47:28.086373   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vf9lt" [bd0874f9-2b20-41d1-ac60-ce02f1ef8659] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 19.0108533s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (19.76s)

TestNetworkPlugins/group/kindnet/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-579000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.38s)

TestNetworkPlugins/group/kindnet/Localhost (0.69s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.69s)

TestNetworkPlugins/group/kindnet/HairPin (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.40s)

TestNetworkPlugins/group/enable-default-cni/Start (154.93s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (2m34.9291177s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (154.93s)

TestNetworkPlugins/group/bridge/Start (115.28s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E0923 12:47:58.811590   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m55.275551s)
--- PASS: TestNetworkPlugins/group/bridge/Start (115.28s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-s2brb" [60b7b36a-5ba7-442d-9bdc-bc73fbeca966] Running
E0923 12:48:04.979851   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0094714s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.82s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-579000 "pgrep -a kubelet"
I0923 12:48:09.091943   13200 config.go:182] Loaded profile config "flannel-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.82s)

TestNetworkPlugins/group/flannel/NetCatPod (27.52s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-579000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-579000 replace --force -f testdata\netcat-deployment.yaml: (7.8028987s)
I0923 12:48:16.916738   13200 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
E0923 12:48:17.521095   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:48:17.527863   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:48:17.540085   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:48:17.562004   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0923 12:48:17.604718   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:48:17.686282   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-z7hqd" [b0a3198c-a9ec-41e9-aa88-adf0ce856119] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 12:48:17.848609   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:48:18.170852   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:48:18.814393   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:48:20.097115   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:48:21.111245   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-732200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:48:22.660502   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:48:27.782536   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-z7hqd" [b0a3198c-a9ec-41e9-aa88-adf0ce856119] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 19.0122289s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (27.52s)

TestNetworkPlugins/group/kubenet/Start (105.35s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-579000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m45.3543396s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (105.35s)

TestNetworkPlugins/group/flannel/DNS (0.42s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-579000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.42s)

TestNetworkPlugins/group/flannel/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.33s)

TestNetworkPlugins/group/flannel/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.32s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.89s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-579000 "pgrep -a kubelet"
I0923 12:49:45.890715   13200 config.go:182] Loaded profile config "bridge-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.89s)

TestNetworkPlugins/group/bridge/NetCatPod (19.73s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-579000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9x5t9" [75c821ff-95c1-4b02-b154-4a3c7ede241a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 12:49:47.294671   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:47.301278   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:47.314288   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:47.336955   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:47.379715   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:47.462671   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:47.626110   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:47.949226   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:48.591543   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:49.874700   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:52.437733   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:49:57.561265   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-9x5t9" [75c821ff-95c1-4b02-b154-4a3c7ede241a] Running
E0923 12:50:01.696972   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-694600\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 19.0081819s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (19.73s)

TestNetworkPlugins/group/bridge/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-579000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.34s)

TestNetworkPlugins/group/bridge/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.29s)

TestNetworkPlugins/group/bridge/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.31s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-579000 "pgrep -a kubelet"
I0923 12:50:19.545444   13200 config.go:182] Loaded profile config "enable-default-cni-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.83s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-579000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pc4m4" [b4635312-c763-47ce-aba4-bd42a3e03ab9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pc4m4" [b4635312-c763-47ce-aba4-bd42a3e03ab9] Running
E0923 12:50:37.237417   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-732200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 20.0112547s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.58s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.86s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-579000 "pgrep -a kubelet"
I0923 12:50:21.488861   13200 config.go:182] Loaded profile config "kubenet-579000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.86s)

TestNetworkPlugins/group/kubenet/NetCatPod (21.84s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-579000 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q9422" [ede53c5f-4a83-412b-b0ce-81eb8e0359d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 12:50:28.286627   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-579000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-q9422" [ede53c5f-4a83-412b-b0ce-81eb8e0359d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 21.0131485s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (21.84s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-579000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.38s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.33s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.32s)

TestNetworkPlugins/group/kubenet/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-579000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.34s)

TestNetworkPlugins/group/kubenet/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.31s)

TestNetworkPlugins/group/kubenet/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-579000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.34s)

Test skip (24/339)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestAddons/parallel/Ingress (18.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-827700 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-827700 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-827700 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [83eb6d2e-0b95-4ba8-b7b2-e9ff5172522f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [83eb6d2e-0b95-4ba8-b7b2-e9ff5172522f] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.0129123s
I0923 11:26:11.849467   13200 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-827700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:280: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (18.99s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-716900 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-716900 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 3392: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (19.51s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-716900 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-716900 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-p2lnt" [7494fc30-b3b9-4a04-9e79-abfed3580e35] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-p2lnt" [7494fc30-b3b9-4a04-9e79-abfed3580e35] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.0354219s
functional_test.go:1646: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (19.51s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (1s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-556000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-556000
--- SKIP: TestStartStop/group/disable-driver-mounts (1.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (17.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0923 12:32:48.058207   13200 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-716900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
panic.go:629: 
----------------------- debugLogs start: cilium-579000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-579000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-579000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-579000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-579000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-579000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-579000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-579000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-579000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-579000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-579000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: /etc/hosts:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: /etc/resolv.conf:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-579000

>>> host: crictl pods:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: crictl containers:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> k8s: describe netcat deployment:
error: context "cilium-579000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-579000" does not exist

>>> k8s: netcat logs:
error: context "cilium-579000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-579000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-579000" does not exist

>>> k8s: coredns logs:
error: context "cilium-579000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-579000" does not exist

>>> k8s: api server logs:
error: context "cilium-579000" does not exist

>>> host: /etc/cni:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: ip a s:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: ip r s:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: iptables-save:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: iptables table nat:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-579000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-579000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-579000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-579000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-579000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-579000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-579000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-579000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-579000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-579000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-579000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: kubelet daemon config:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> k8s: kubelet logs:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Mon, 23 Sep 2024 12:31:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://127.0.0.1:59023
  name: pause-050600
contexts:
- context:
    cluster: pause-050600
    extensions:
    - extension:
        last-update: Mon, 23 Sep 2024 12:31:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-050600
  name: pause-050600
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-050600
  user:
    client-certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-050600\client.crt
    client-key: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-050600\client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-579000

>>> host: docker daemon status:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: docker daemon config:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: docker system info:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: cri-docker daemon status:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: cri-docker daemon config:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: cri-dockerd version:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: containerd daemon status:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: containerd daemon config:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: containerd config dump:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: crio daemon status:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: crio daemon config:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: /etc/crio:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

>>> host: crio config:
* Profile "cilium-579000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579000"

----------------------- debugLogs end: cilium-579000 [took: 16.2111808s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-579000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-579000
--- SKIP: TestNetworkPlugins/group/cilium (17.08s)