Test Report: Docker_Windows 21801

3dc60e2e5dc0007721440fd051e7cba5635b79e7:2025-10-27:42091

Failed tests (2/344)

Order  Failed test                                   Duration (s)
58     TestErrorSpam/setup                           50.89
358    TestStartStop/group/newest-cni/serial/Pause   40.33
TestErrorSpam/setup (50.89s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-570800 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-570800 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 --driver=docker: (50.8862576s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-570800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=21801
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-570800" primary control-plane node in "nospam-570800" cluster
* Pulling base image v0.0.48-1760939008-21773 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-570800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (50.89s)
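The two stderr lines above are what tripped the test: error_spam_test.go flags any stderr line it does not recognize as benign. A minimal shell sketch of that check, using the stderr lines copied from the log above; the allow-list entry shown is a hypothetical placeholder, not minikube's real list:

```shell
# Flag stderr lines that do not match a known-benign allow-list.
# Input lines copied from the failure log; allow-list entry is hypothetical.
stderr_lines='! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/'

flagged=$(printf '%s\n' "$stderr_lines" | while IFS= read -r line; do
  case "$line" in
    "! Enabling addons:"*) ;;                       # hypothetical benign message
    *) printf 'unexpected stderr: %s\n' "$line" ;;  # anything else fails the test
  esac
done)
printf '%s\n' "$flagged"
```

Both lines are flagged here, matching the two `unexpected stderr` entries reported at error_spam_test.go:96 above.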

TestStartStop/group/newest-cni/serial/Pause (40.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-791900 --alsologtostderr -v=1
E1027 20:09:10.289046   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-791900 --alsologtostderr -v=1: exit status 80 (9.5462696s)

-- stdout --
	* Pausing node newest-cni-791900 ... 
	
	

-- /stdout --
** stderr ** 
	I1027 20:09:05.866189    4564 out.go:360] Setting OutFile to fd 1192 ...
	I1027 20:09:05.909819    4564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:09:05.909819    4564 out.go:374] Setting ErrFile to fd 1264...
	I1027 20:09:05.909819    4564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:09:05.920827    4564 out.go:368] Setting JSON to false
	I1027 20:09:05.920827    4564 mustload.go:65] Loading cluster: newest-cni-791900
	I1027 20:09:05.920827    4564 config.go:182] Loaded profile config "newest-cni-791900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 20:09:05.932820    4564 cli_runner.go:164] Run: docker container inspect newest-cni-791900 --format={{.State.Status}}
	I1027 20:09:05.985668    4564 host.go:66] Checking if "newest-cni-791900" exists ...
	I1027 20:09:05.991661    4564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-791900
	I1027 20:09:06.040669    4564 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-791900 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 20:09:06.045669    4564 out.go:179] * Pausing node newest-cni-791900 ... 
	I1027 20:09:06.051667    4564 host.go:66] Checking if "newest-cni-791900" exists ...
	I1027 20:09:06.058662    4564 ssh_runner.go:195] Run: systemctl --version
	I1027 20:09:06.063663    4564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-791900
	I1027 20:09:06.119895    4564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59316 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-791900\id_rsa Username:docker}
	I1027 20:09:06.272581    4564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:09:06.292780    4564 pause.go:52] kubelet running: true
	I1027 20:09:06.300540    4564 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:09:06.674018    4564 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|istio-operator)_ --format={{.ID}}
	I1027 20:09:06.713004    4564 docker.go:501] Pausing containers: [2c5823499033 5f27e5738a2e dcc03d944fd3 25926e275c2c 3cfa46c3ed14 3a4c5dba83f4 b82b27fd319c 803d08509c21 7176c4cb6419 c1e7a89ac54a 12e30c3d2252 44e1a61e5ebe eb5cf53245aa 0b090dda40b9 65dbcd1cda7c]
	I1027 20:09:06.719500    4564 ssh_runner.go:195] Run: docker pause 2c5823499033 5f27e5738a2e dcc03d944fd3 25926e275c2c 3cfa46c3ed14 3a4c5dba83f4 b82b27fd319c 803d08509c21 7176c4cb6419 c1e7a89ac54a 12e30c3d2252 44e1a61e5ebe eb5cf53245aa 0b090dda40b9 65dbcd1cda7c
	I1027 20:09:15.041108    4564 ssh_runner.go:235] Completed: docker pause 2c5823499033 5f27e5738a2e dcc03d944fd3 25926e275c2c 3cfa46c3ed14 3a4c5dba83f4 b82b27fd319c 803d08509c21 7176c4cb6419 c1e7a89ac54a 12e30c3d2252 44e1a61e5ebe eb5cf53245aa 0b090dda40b9 65dbcd1cda7c: (8.3214945s)
	I1027 20:09:15.083128    4564 out.go:203] 
	W1027 20:09:15.134762    4564 out.go:285] X Exiting due to GUEST_PAUSE: Pause: pausing containers: docker: docker pause 2c5823499033 5f27e5738a2e dcc03d944fd3 25926e275c2c 3cfa46c3ed14 3a4c5dba83f4 b82b27fd319c 803d08509c21 7176c4cb6419 c1e7a89ac54a 12e30c3d2252 44e1a61e5ebe eb5cf53245aa 0b090dda40b9 65dbcd1cda7c: Process exited with status 1
	stdout:
	2c5823499033
	5f27e5738a2e
	dcc03d944fd3
	25926e275c2c
	3cfa46c3ed14
	3a4c5dba83f4
	b82b27fd319c
	7176c4cb6419
	c1e7a89ac54a
	12e30c3d2252
	44e1a61e5ebe
	eb5cf53245aa
	0b090dda40b9
	65dbcd1cda7c
	
	stderr:
	Error response from daemon: cannot pause container 803d08509c218a1a7c9e1f16f99beb496050e64cc7b66b46da9b22ef5193b62c: OCI runtime pause failed: unable to freeze: unknown
	
	W1027 20:09:15.134865    4564 out.go:285] * 
	W1027 20:09:15.233490    4564 out.go:308] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_5.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 20:09:15.282160    4564 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-windows-amd64.exe pause -p newest-cni-791900 --alsologtostderr -v=1 failed: exit status 80
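`docker pause` echoes each successfully paused ID on stdout, so the container that could not be frozen is the one missing from the stdout list. A quick shell sketch that isolates it, using the IDs copied from the log above:

```shell
# 15 IDs were passed to `docker pause` (from the ssh_runner line above)...
requested='2c5823499033 5f27e5738a2e dcc03d944fd3 25926e275c2c 3cfa46c3ed14 3a4c5dba83f4 b82b27fd319c 803d08509c21 7176c4cb6419 c1e7a89ac54a 12e30c3d2252 44e1a61e5ebe eb5cf53245aa 0b090dda40b9 65dbcd1cda7c'
# ...but only 14 were echoed on stdout before the daemon error.
paused='2c5823499033 5f27e5738a2e dcc03d944fd3 25926e275c2c 3cfa46c3ed14 3a4c5dba83f4 b82b27fd319c 7176c4cb6419 c1e7a89ac54a 12e30c3d2252 44e1a61e5ebe eb5cf53245aa 0b090dda40b9 65dbcd1cda7c'

failing=''
for id in $requested; do
  case " $paused " in
    *" $id "*) ;;                  # paused successfully
    *) failing="$failing$id " ;;   # missing from stdout: pause failed
  esac
done
echo "failed to pause: $failing"   # -> 803d08509c21, matching the daemon error
```

The lone survivor, 803d08509c21, is exactly the container named in the daemon's "cannot pause container 803d08509c21…: OCI runtime pause failed: unable to freeze" error.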
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-791900
helpers_test.go:243: (dbg) docker inspect newest-cni-791900:

-- stdout --
	[
	    {
	        "Id": "2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95",
	        "Created": "2025-10-27T20:07:34.825694187Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:08:39.023116256Z",
	            "FinishedAt": "2025-10-27T20:08:36.640410695Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95/hostname",
	        "HostsPath": "/var/lib/docker/containers/2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95/hosts",
	        "LogPath": "/var/lib/docker/containers/2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95/2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95-json.log",
	        "Name": "/newest-cni-791900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-791900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-791900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9aa63b6849b5852197b639a2ad68499ec2908c75c965a46c026bc42a00fa727d-init/diff:/var/lib/docker/overlay2/f5981ab6bccf9778a1137884da3b6053ae71c5892b008b6ad4dbe508a3a06fc6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9aa63b6849b5852197b639a2ad68499ec2908c75c965a46c026bc42a00fa727d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9aa63b6849b5852197b639a2ad68499ec2908c75c965a46c026bc42a00fa727d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9aa63b6849b5852197b639a2ad68499ec2908c75c965a46c026bc42a00fa727d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-791900",
	                "Source": "/var/lib/docker/volumes/newest-cni-791900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-791900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-791900",
	                "name.minikube.sigs.k8s.io": "newest-cni-791900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dcd5659e68a4d2a0c820d5ad01b8a0a29a12c1460a1e2587e9f6880902970b09",
	            "SandboxKey": "/var/run/docker/netns/dcd5659e68a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59316"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59317"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59318"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59319"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59315"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-791900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c739e715394ae9a91d51c64d75bfcfe043c33d280180ca7ed1a6c4cbb2ab288e",
	                    "EndpointID": "9e31e592ee133630da93e4e397b06782e2a8b11d98c7e1c90ace22d2820b47b4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-791900",
	                        "2046cef085e3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-791900 -n newest-cni-791900
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-791900 -n newest-cni-791900: exit status 2 (656.7033ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-791900 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-791900 logs -n 25: (15.1389468s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                        ARGS                                                                                                         │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-892000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                             │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:07 UTC │ 27 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-036500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                       │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:07 UTC │ 27 Oct 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-892000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:07 UTC │ 27 Oct 25 20:08 UTC │
	│ start   │ -p embed-certs-036500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.1                                                                                        │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:07 UTC │ 27 Oct 25 20:08 UTC │
	│ start   │ -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker                                                                                                                             │ kubernetes-upgrade-419600    │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker                                                                                                      │ kubernetes-upgrade-419600    │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p newest-cni-791900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                             │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ stop    │ -p newest-cni-791900 --alsologtostderr -v=3                                                                                                                                                                         │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ addons  │ enable dashboard -p newest-cni-791900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                        │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ start   │ -p newest-cni-791900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.34.1 │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:09 UTC │
	│ image   │ default-k8s-diff-port-892000 image list --format=json                                                                                                                                                               │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ pause   │ -p default-k8s-diff-port-892000 --alsologtostderr -v=1                                                                                                                                                              │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ image   │ embed-certs-036500 image list --format=json                                                                                                                                                                         │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ pause   │ -p embed-certs-036500 --alsologtostderr -v=1                                                                                                                                                                        │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ unpause │ -p default-k8s-diff-port-892000 --alsologtostderr -v=1                                                                                                                                                              │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ unpause │ -p embed-certs-036500 --alsologtostderr -v=1                                                                                                                                                                        │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ delete  │ -p default-k8s-diff-port-892000                                                                                                                                                                                     │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ delete  │ -p embed-certs-036500                                                                                                                                                                                               │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ delete  │ -p default-k8s-diff-port-892000                                                                                                                                                                                     │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ start   │ -p auto-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker                                                                                                                       │ auto-938400                  │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │                     │
	│ delete  │ -p embed-certs-036500                                                                                                                                                                                               │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ start   │ -p kindnet-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker                                                                                                      │ kindnet-938400               │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-419600                                                                                                                                                                                        │ kubernetes-upgrade-419600    │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:09 UTC │                     │
	│ image   │ newest-cni-791900 image list --format=json                                                                                                                                                                          │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:09 UTC │ 27 Oct 25 20:09 UTC │
	│ pause   │ -p newest-cni-791900 --alsologtostderr -v=1                                                                                                                                                                         │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:09 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:08:59
	Running on machine: minikube4
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:08:59.139459    2576 out.go:360] Setting OutFile to fd 1480 ...
	I1027 20:08:59.189816    2576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:08:59.189816    2576 out.go:374] Setting ErrFile to fd 1472...
	I1027 20:08:59.189816    2576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:08:59.210965    2576 out.go:368] Setting JSON to false
	I1027 20:08:59.220282    2576 start.go:131] hostinfo: {"hostname":"minikube4","uptime":4989,"bootTime":1761590749,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1027 20:08:59.220495    2576 start.go:139] gopshost.Virtualization returned error: not implemented yet
	I1027 20:08:59.224654    2576 out.go:179] * [kindnet-938400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	I1027 20:08:59.227968    2576 notify.go:220] Checking for updates...
	I1027 20:08:59.229953    2576 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1027 20:08:59.232962    2576 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:08:59.234970    2576 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1027 20:08:59.236966    2576 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:08:59.239970    2576 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:08:59.242967    2576 config.go:182] Loaded profile config "auto-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 20:08:59.243969    2576 config.go:182] Loaded profile config "kubernetes-upgrade-419600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 20:08:59.243969    2576 config.go:182] Loaded profile config "newest-cni-791900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 20:08:59.243969    2576 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:08:59.394227    2576 docker.go:123] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1027 20:08:59.403488    2576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:08:59.710651    2576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:true NGoroutines:92 SystemTime:2025-10-27 20:08:59.645603702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 20:08:59.984920    2576 out.go:179] * Using the docker driver based on user configuration
	I1027 20:08:59.991620    2576 start.go:305] selected driver: docker
	I1027 20:08:59.991620    2576 start.go:925] validating driver "docker" against <nil>
	I1027 20:08:59.991620    2576 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:09:00.049758    2576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:09:00.342200    2576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:92 SystemTime:2025-10-27 20:09:00.317208487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 20:09:00.342200    2576 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 20:09:00.343197    2576 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:09:00.348219    2576 out.go:179] * Using Docker Desktop driver with root privileges
	I1027 20:09:00.354194    2576 cni.go:84] Creating CNI manager for "kindnet"
	I1027 20:09:00.354194    2576 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 20:09:00.354194    2576 start.go:349] cluster config:
	{Name:kindnet-938400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-938400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:09:00.363187    2576 out.go:179] * Starting "kindnet-938400" primary control-plane node in "kindnet-938400" cluster
	I1027 20:09:00.366187    2576 cache.go:123] Beginning downloading kic base image for docker with docker
	I1027 20:09:00.372193    2576 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:09:00.377188    2576 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:09:00.377188    2576 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1027 20:09:00.377188    2576 preload.go:198] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1027 20:09:00.377188    2576 cache.go:58] Caching tarball of preloaded images
	I1027 20:09:00.378187    2576 preload.go:233] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1027 20:09:00.378187    2576 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1027 20:09:00.378187    2576 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-938400\config.json ...
	I1027 20:09:00.378187    2576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-938400\config.json: {Name:mk30caa1c0d6628a97c825f9bf45565fc9d84a2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:09:00.457835    2576 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:09:00.457835    2576 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:09:00.457835    2576 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:09:00.458826    2576 start.go:360] acquireMachinesLock for kindnet-938400: {Name:mkc070983665e3b89c229577eedca59c9d801a11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:09:00.458826    2576 start.go:364] duration metric: took 0s to acquireMachinesLock for "kindnet-938400"
	I1027 20:09:00.458826    2576 start.go:93] Provisioning new machine with config: &{Name:kindnet-938400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-938400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1027 20:09:00.458826    2576 start.go:125] createHost starting for "" (driver="docker")
	I1027 20:08:57.777654    7120 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-419600 --format={{.State.Status}}
	I1027 20:08:57.779994    7120 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:08:57.779994    7120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:08:57.787726    7120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-419600
	I1027 20:08:57.837723    7120 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:08:57.837723    7120 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:08:57.844709    7120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58501 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-419600\id_rsa Username:docker}
	I1027 20:08:57.846719    7120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-419600
	I1027 20:08:57.902720    7120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58501 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-419600\id_rsa Username:docker}
	I1027 20:08:58.405714    7120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:08:58.505514    7120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:08:58.703929    7120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:08:58.657635   10460 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 20:08:58.658632   10460 start.go:159] libmachine.API.Create for "auto-938400" (driver="docker")
	I1027 20:08:58.658632   10460 client.go:168] LocalClient.Create starting
	I1027 20:08:58.658632   10460 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1027 20:08:58.658632   10460 main.go:141] libmachine: Decoding PEM data...
	I1027 20:08:58.658632   10460 main.go:141] libmachine: Parsing certificate...
	I1027 20:08:58.658632   10460 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1027 20:08:58.658632   10460 main.go:141] libmachine: Decoding PEM data...
	I1027 20:08:58.658632   10460 main.go:141] libmachine: Parsing certificate...
	I1027 20:08:58.665627   10460 cli_runner.go:164] Run: docker network inspect auto-938400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 20:08:58.734109   10460 cli_runner.go:211] docker network inspect auto-938400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 20:08:58.739108   10460 network_create.go:284] running [docker network inspect auto-938400] to gather additional debugging logs...
	I1027 20:08:58.739108   10460 cli_runner.go:164] Run: docker network inspect auto-938400
	W1027 20:08:58.790459   10460 cli_runner.go:211] docker network inspect auto-938400 returned with exit code 1
	I1027 20:08:58.790459   10460 network_create.go:287] error running [docker network inspect auto-938400]: docker network inspect auto-938400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-938400 not found
	I1027 20:08:58.790459   10460 network_create.go:289] output of [docker network inspect auto-938400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-938400 not found
	
	** /stderr **
	I1027 20:08:58.801455   10460 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:08:58.904462   10460 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1027 20:08:58.935523   10460 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1027 20:08:58.949524   10460 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000eff740}
	I1027 20:08:58.949524   10460 network_create.go:124] attempt to create docker network auto-938400 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1027 20:08:58.954526   10460 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-938400 auto-938400
	W1027 20:08:59.018913   10460 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-938400 auto-938400 returned with exit code 1
	W1027 20:08:59.019043   10460 network_create.go:149] failed to create docker network auto-938400 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-938400 auto-938400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1027 20:08:59.019043   10460 network_create.go:116] failed to create docker network auto-938400 192.168.67.0/24, will retry: subnet is taken
	I1027 20:08:59.045379   10460 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1027 20:08:59.058382   10460 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018e3a40}
	I1027 20:08:59.058382   10460 network_create.go:124] attempt to create docker network auto-938400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 20:08:59.064389   10460 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-938400 auto-938400
	W1027 20:08:59.133458   10460 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-938400 auto-938400 returned with exit code 1
	W1027 20:08:59.133458   10460 network_create.go:149] failed to create docker network auto-938400 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-938400 auto-938400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1027 20:08:59.133458   10460 network_create.go:116] failed to create docker network auto-938400 192.168.76.0/24, will retry: subnet is taken
	I1027 20:08:59.155461   10460 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1027 20:08:59.169457   10460 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017c70e0}
	I1027 20:08:59.169457   10460 network_create.go:124] attempt to create docker network auto-938400 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1027 20:08:59.175455   10460 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-938400 auto-938400
	I1027 20:08:59.359979   10460 network_create.go:108] docker network auto-938400 192.168.85.0/24 created
	I1027 20:08:59.360062   10460 kic.go:121] calculated static IP "192.168.85.2" for the "auto-938400" container
	I1027 20:08:59.378138   10460 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 20:08:59.441481   10460 cli_runner.go:164] Run: docker volume create auto-938400 --label name.minikube.sigs.k8s.io=auto-938400 --label created_by.minikube.sigs.k8s.io=true
	I1027 20:08:59.537741   10460 oci.go:103] Successfully created a docker volume auto-938400
	I1027 20:08:59.543731   10460 cli_runner.go:164] Run: docker run --rm --name auto-938400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-938400 --entrypoint /usr/bin/test -v auto-938400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 20:09:01.448767   10460 cli_runner.go:217] Completed: docker run --rm --name auto-938400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-938400 --entrypoint /usr/bin/test -v auto-938400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.9050099s)
	I1027 20:09:01.448767   10460 oci.go:107] Successfully prepared a docker volume auto-938400
	I1027 20:09:01.448767   10460 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1027 20:09:01.448767   10460 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 20:09:01.454762   10460 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-938400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 20:09:01.610845    8232 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.2989882s)
	I1027 20:09:01.610845    8232 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.2144506s)
	I1027 20:09:01.610845    8232 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.3049723s)
	I1027 20:09:01.617286    8232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-791900
	I1027 20:09:01.668287    8232 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:09:01.675288    8232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:09:02.010611    8232 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.9856507s)
	I1027 20:09:02.010611    8232 addons.go:479] Verifying addon metrics-server=true in "newest-cni-791900"
	I1027 20:09:02.709325    8232 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.7881818s)
	I1027 20:09:02.709325    8232 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.03397s)
	I1027 20:09:02.709393    8232 api_server.go:72] duration metric: took 10.0989666s to wait for apiserver process to appear ...
	I1027 20:09:02.709393    8232 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:09:02.709462    8232 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59315/healthz ...
	I1027 20:09:02.712218    8232 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-791900 addons enable metrics-server
	
	I1027 20:09:02.725390    8232 api_server.go:279] https://127.0.0.1:59315/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:09:02.725458    8232 api_server.go:103] status: https://127.0.0.1:59315/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 20:09:02.773583    8232 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1027 20:09:02.798728    8232 addons.go:514] duration metric: took 10.1883004s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1027 20:09:02.420881    7120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.0151118s)
	I1027 20:09:03.120878    7120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.6153003s)
	I1027 20:09:03.120878    7120 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.4168885s)
	I1027 20:09:03.123876    7120 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1027 20:09:03.126889    7120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-419600
	I1027 20:09:03.130893    7120 addons.go:514] duration metric: took 5.4441814s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1027 20:09:03.185894    7120 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:09:03.199907    7120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:09:03.309993    7120 api_server.go:72] duration metric: took 5.623279s to wait for apiserver process to appear ...
	I1027 20:09:03.310088    7120 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:09:03.310127    7120 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58505/healthz ...
	I1027 20:09:03.323010    7120 api_server.go:279] https://127.0.0.1:58505/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:09:03.323010    7120 api_server.go:103] status: https://127.0.0.1:58505/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 20:09:03.810700    7120 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58505/healthz ...
	I1027 20:09:03.819708    7120 api_server.go:279] https://127.0.0.1:58505/healthz returned 200:
	ok
	I1027 20:09:03.822706    7120 api_server.go:141] control plane version: v1.34.1
	I1027 20:09:03.822706    7120 api_server.go:131] duration metric: took 512.6115ms to wait for apiserver health ...
	I1027 20:09:03.822706    7120 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:09:03.834710    7120 system_pods.go:59] 8 kube-system pods found
	I1027 20:09:03.834710    7120 system_pods.go:61] "coredns-66bc5c9577-br5nb" [e6324072-0615-46a0-b65c-87752047855b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:09:03.834710    7120 system_pods.go:61] "coredns-66bc5c9577-xgzwh" [16c976cb-c796-4a42-a7a5-03adba73f3e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:09:03.834710    7120 system_pods.go:61] "etcd-kubernetes-upgrade-419600" [23fd95fd-3a31-4332-8fc6-299482309d9e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:09:03.834710    7120 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-419600" [7239e0ac-b60b-4553-8972-04a8364133ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:09:03.834710    7120 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-419600" [639d02ae-2d8b-4b29-a9e8-40f59cc06563] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:09:03.834710    7120 system_pods.go:61] "kube-proxy-vsq42" [bbeba610-eb62-4c9c-953f-f0ecaee09bf7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 20:09:03.834710    7120 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-419600" [69b9d53d-2c1f-4c74-b86c-9170d0181ac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:09:03.834710    7120 system_pods.go:61] "storage-provisioner" [6156629c-236e-4511-bb98-f5b96a8bd3ff] Running
	I1027 20:09:03.834710    7120 system_pods.go:74] duration metric: took 12.0029ms to wait for pod list to return data ...
	I1027 20:09:03.834710    7120 kubeadm.go:586] duration metric: took 6.1479886s to wait for: map[apiserver:true system_pods:true]
	I1027 20:09:03.834710    7120 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:09:03.839711    7120 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1027 20:09:03.839711    7120 node_conditions.go:123] node cpu capacity is 16
	I1027 20:09:03.839711    7120 node_conditions.go:105] duration metric: took 5.0018ms to run NodePressure ...
	I1027 20:09:03.839711    7120 start.go:241] waiting for startup goroutines ...
	I1027 20:09:03.839711    7120 start.go:246] waiting for cluster config update ...
	I1027 20:09:03.839711    7120 start.go:255] writing updated cluster config ...
	I1027 20:09:03.848703    7120 ssh_runner.go:195] Run: rm -f paused
	I1027 20:09:03.957765    7120 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 20:09:03.960523    7120 out.go:179] * Done! kubectl is now configured to use "kubernetes-upgrade-419600" cluster and "default" namespace by default
	I1027 20:09:00.462852    2576 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 20:09:00.462852    2576 start.go:159] libmachine.API.Create for "kindnet-938400" (driver="docker")
	I1027 20:09:00.462852    2576 client.go:168] LocalClient.Create starting
	I1027 20:09:00.463824    2576 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1027 20:09:00.463824    2576 main.go:141] libmachine: Decoding PEM data...
	I1027 20:09:00.463824    2576 main.go:141] libmachine: Parsing certificate...
	I1027 20:09:00.463824    2576 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1027 20:09:00.463824    2576 main.go:141] libmachine: Decoding PEM data...
	I1027 20:09:00.463824    2576 main.go:141] libmachine: Parsing certificate...
	I1027 20:09:00.469819    2576 cli_runner.go:164] Run: docker network inspect kindnet-938400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 20:09:00.531827    2576 cli_runner.go:211] docker network inspect kindnet-938400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 20:09:00.537832    2576 network_create.go:284] running [docker network inspect kindnet-938400] to gather additional debugging logs...
	I1027 20:09:00.537832    2576 cli_runner.go:164] Run: docker network inspect kindnet-938400
	W1027 20:09:00.587834    2576 cli_runner.go:211] docker network inspect kindnet-938400 returned with exit code 1
	I1027 20:09:00.587834    2576 network_create.go:287] error running [docker network inspect kindnet-938400]: docker network inspect kindnet-938400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-938400 not found
	I1027 20:09:00.587834    2576 network_create.go:289] output of [docker network inspect kindnet-938400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-938400 not found
	
	** /stderr **
	I1027 20:09:00.594827    2576 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:09:00.664822    2576 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1027 20:09:00.680821    2576 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1027 20:09:00.713127    2576 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1027 20:09:00.743890    2576 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1027 20:09:00.759424    2576 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1027 20:09:00.772134    2576 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001724b40}
	I1027 20:09:00.772134    2576 network_create.go:124] attempt to create docker network kindnet-938400 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1027 20:09:00.777291    2576 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-938400 kindnet-938400
	W1027 20:09:00.837800    2576 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-938400 kindnet-938400 returned with exit code 1
	W1027 20:09:00.838372    2576 network_create.go:149] failed to create docker network kindnet-938400 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-938400 kindnet-938400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1027 20:09:00.838372    2576 network_create.go:116] failed to create docker network kindnet-938400 192.168.94.0/24, will retry: subnet is taken
	I1027 20:09:00.866585    2576 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1027 20:09:00.880168    2576 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019575c0}
	I1027 20:09:00.880168    2576 network_create.go:124] attempt to create docker network kindnet-938400 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1027 20:09:00.890182    2576 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-938400 kindnet-938400
	I1027 20:09:01.238619    2576 network_create.go:108] docker network kindnet-938400 192.168.103.0/24 created
	I1027 20:09:01.238712    2576 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-938400" container
	I1027 20:09:01.257470    2576 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 20:09:01.323164    2576 cli_runner.go:164] Run: docker volume create kindnet-938400 --label name.minikube.sigs.k8s.io=kindnet-938400 --label created_by.minikube.sigs.k8s.io=true
	I1027 20:09:01.380181    2576 oci.go:103] Successfully created a docker volume kindnet-938400
	I1027 20:09:01.389174    2576 cli_runner.go:164] Run: docker run --rm --name kindnet-938400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-938400 --entrypoint /usr/bin/test -v kindnet-938400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 20:09:03.174889    2576 cli_runner.go:217] Completed: docker run --rm --name kindnet-938400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-938400 --entrypoint /usr/bin/test -v kindnet-938400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.7856906s)
	I1027 20:09:03.174889    2576 oci.go:107] Successfully prepared a docker volume kindnet-938400
	I1027 20:09:03.174889    2576 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1027 20:09:03.174889    2576 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 20:09:03.183895    2576 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-938400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 20:09:03.209891    8232 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59315/healthz ...
	I1027 20:09:03.220910    8232 api_server.go:279] https://127.0.0.1:59315/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:09:03.220910    8232 api_server.go:103] status: https://127.0.0.1:59315/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 20:09:03.710123    8232 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59315/healthz ...
	I1027 20:09:03.720121    8232 api_server.go:279] https://127.0.0.1:59315/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:09:03.720121    8232 api_server.go:103] status: https://127.0.0.1:59315/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 20:09:04.210096    8232 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59315/healthz ...
	I1027 20:09:04.223088    8232 api_server.go:279] https://127.0.0.1:59315/healthz returned 200:
	ok
	I1027 20:09:04.238107    8232 api_server.go:141] control plane version: v1.34.1
	I1027 20:09:04.238107    8232 api_server.go:131] duration metric: took 1.5286929s to wait for apiserver health ...
	I1027 20:09:04.238107    8232 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:09:04.245099    8232 system_pods.go:59] 8 kube-system pods found
	I1027 20:09:04.246094    8232 system_pods.go:61] "coredns-66bc5c9577-9dn89" [afa5c2f7-22ca-40e5-8e1c-e26e9d521c7a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:09:04.246094    8232 system_pods.go:61] "etcd-newest-cni-791900" [82097e09-f49f-407b-a378-c76b1c82a2de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:09:04.246094    8232 system_pods.go:61] "kube-apiserver-newest-cni-791900" [c189df89-2350-49fc-80d1-a477100c4b1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:09:04.246094    8232 system_pods.go:61] "kube-controller-manager-newest-cni-791900" [d592a2a2-1238-4815-86dc-41eef0a74b9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:09:04.246094    8232 system_pods.go:61] "kube-proxy-vbg5n" [ee68ca38-17ef-44ad-b834-acbd164ef7d6] Running
	I1027 20:09:04.246094    8232 system_pods.go:61] "kube-scheduler-newest-cni-791900" [7256c69c-21dc-4311-9eb3-aa3712eb9554] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:09:04.246094    8232 system_pods.go:61] "metrics-server-746fcd58dc-2jmdk" [30732149-b3b4-43dc-80bc-6fd70db8a5bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 20:09:04.246094    8232 system_pods.go:61] "storage-provisioner" [ac6bfc17-79ef-4e20-a54a-3b65aabcba9e] Running
	I1027 20:09:04.246094    8232 system_pods.go:74] duration metric: took 7.9866ms to wait for pod list to return data ...
	I1027 20:09:04.246094    8232 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:09:04.251088    8232 default_sa.go:45] found service account: "default"
	I1027 20:09:04.251088    8232 default_sa.go:55] duration metric: took 4.994ms for default service account to be created ...
	I1027 20:09:04.251088    8232 kubeadm.go:586] duration metric: took 11.6406401s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 20:09:04.251088    8232 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:09:04.258094    8232 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1027 20:09:04.258094    8232 node_conditions.go:123] node cpu capacity is 16
	I1027 20:09:04.258094    8232 node_conditions.go:105] duration metric: took 7.0059ms to run NodePressure ...
	I1027 20:09:04.258094    8232 start.go:241] waiting for startup goroutines ...
	I1027 20:09:04.258094    8232 start.go:246] waiting for cluster config update ...
	I1027 20:09:04.258094    8232 start.go:255] writing updated cluster config ...
	I1027 20:09:04.266114    8232 ssh_runner.go:195] Run: rm -f paused
	I1027 20:09:04.385090    8232 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 20:09:04.390088    8232 out.go:179] * Done! kubectl is now configured to use "newest-cni-791900" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Loaded network plugin cni"
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 27 20:08:49 newest-cni-791900 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Oct 27 20:08:50 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-9dn89_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"76de13807fcf551d2617c742466234b72635199395deb1d895af124186ce3e61\""
	Oct 27 20:08:50 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-888vf_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"c5b4ebfa6237c7ee78ef3353889dc2c61d445b27c34734ccb43cec034b30b3fd\""
	Oct 27 20:08:51 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-746fcd58dc-2jmdk_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"16fda1b342195affb116a2b37e4dcbc07e36ab96396a3b0e5468ff223d0eee02\""
	Oct 27 20:08:51 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65dbcd1cda7c040391e9eaeb44c74b595f318e352fca285c4db29b6aa5264c82/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:08:51 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0b090dda40b9ac4b2a3804afaee3130229ad8462d5372597d68622b88d4122f3/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:08:52 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb5cf53245aabb6a58ccca9867937c8515b1f817cb50c49b60e110a784be8d3b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:08:52 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/44e1a61e5ebe4de0b4b742ac211a3d7e64f8176e0c3f160bdfcbb4814e0cf70e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:08:59 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:59Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.42.0.0/24,},}"
	Oct 27 20:09:02 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:09:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a4c5dba83f407bfba9a3ac4dab1311636c96509146a1c14b3b20c8ac83a4bf8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:09:02 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:09:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b82b27fd319c70c226be85d6a4939cc1274df82e36a88762b4d8bc4526ffccb8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:09:02 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:09:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3cfa46c3ed14792a85b33df1894979416b5be80101e10095e1386c6733d0876d/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 27 20:09:02 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:09:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/25926e275c2c6333ca79efe406c12138726e6c6b8a80cf3482ad85f830fc0c96/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:09:02 newest-cni-791900 dockerd[927]: time="2025-10-27T20:09:02.682768345Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Oct 27 20:09:02 newest-cni-791900 dockerd[927]: time="2025-10-27T20:09:02.682863054Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Oct 27 20:09:02 newest-cni-791900 dockerd[927]: time="2025-10-27T20:09:02.818174001Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Oct 27 20:09:02 newest-cni-791900 dockerd[927]: time="2025-10-27T20:09:02.818233607Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Oct 27 20:09:07 newest-cni-791900 dockerd[927]: time="2025-10-27T20:09:07.035868035Z" level=error msg="Handler for POST /v1.51/containers/803d08509c21/pause returned error: cannot pause container 803d08509c218a1a7c9e1f16f99beb496050e64cc7b66b46da9b22ef5193b62c: OCI runtime pause failed: unable to freeze: unknown"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2c5823499033c       52546a367cc9e       16 seconds ago       Running             coredns                   1                   25926e275c2c6       coredns-66bc5c9577-9dn89                    kube-system
	5f27e5738a2e8       6e38f40d628db       16 seconds ago       Running             storage-provisioner       1                   b82b27fd319c7       storage-provisioner                         kube-system
	dcc03d944fd38       fc25172553d79       16 seconds ago       Running             kube-proxy                1                   3a4c5dba83f40       kube-proxy-vbg5n                            kube-system
	803d08509c218       5f1f5298c888d       26 seconds ago       Running             etcd                      1                   44e1a61e5ebe4       etcd-newest-cni-791900                      kube-system
	7176c4cb64190       c3994bc696102       26 seconds ago       Running             kube-apiserver            1                   eb5cf53245aab       kube-apiserver-newest-cni-791900            kube-system
	c1e7a89ac54a0       c80c8dbafe7dd       27 seconds ago       Running             kube-controller-manager   1                   0b090dda40b9a       kube-controller-manager-newest-cni-791900   kube-system
	12e30c3d2252d       7dd6aaa1717ab       27 seconds ago       Running             kube-scheduler            1                   65dbcd1cda7c0       kube-scheduler-newest-cni-791900            kube-system
	a428f291ad15d       6e38f40d628db       55 seconds ago       Exited              storage-provisioner       0                   c65ed567812e2       storage-provisioner                         kube-system
	17b84a4a42c51       52546a367cc9e       55 seconds ago       Exited              coredns                   0                   76de13807fcf5       coredns-66bc5c9577-9dn89                    kube-system
	bdb5835021a10       52546a367cc9e       55 seconds ago       Exited              coredns                   0                   c5b4ebfa6237c       coredns-66bc5c9577-888vf                    kube-system
	30075e9acc075       fc25172553d79       56 seconds ago       Exited              kube-proxy                0                   8a25a9d798473       kube-proxy-vbg5n                            kube-system
	058fb7763dd1a       7dd6aaa1717ab       About a minute ago   Exited              kube-scheduler            0                   58efc81b1d896       kube-scheduler-newest-cni-791900            kube-system
	745e4a2e15e44       c80c8dbafe7dd       About a minute ago   Exited              kube-controller-manager   0                   94e11fcbd58f7       kube-controller-manager-newest-cni-791900   kube-system
	21eeea74b8607       c3994bc696102       About a minute ago   Exited              kube-apiserver            0                   b79bfc3872fe0       kube-apiserver-newest-cni-791900            kube-system
	238eee50046ad       5f1f5298c888d       About a minute ago   Exited              etcd                      0                   648def2e45461       etcd-newest-cni-791900                      kube-system
	
	
	==> coredns [17b84a4a42c5] <==
	maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e7e8a6c4578bf29b9f453cb54ade3fb14671793481527b7435e35119b25e84eb3a79242b1f470199f8605ace441674db8f1b6715b77448c20dde63e2dc5d2169
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51309 - 54663 "HINFO IN 7604338953619229612.4548397862659080650. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.053408935s
	
	
	==> coredns [2c5823499033] <==
	maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	
	
	==> coredns [bdb5835021a1] <==
	maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[ +19.675702] tmpfs: Unknown parameter 'noswap'
	[ +19.262323] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:02] tmpfs: Unknown parameter 'noswap'
	[  +7.728416] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:03] tmpfs: Unknown parameter 'noswap'
	[  +9.732869] tmpfs: Unknown parameter 'noswap'
	[ +27.127530] tmpfs: Unknown parameter 'noswap'
	[  +6.066550] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:04] tmpfs: Unknown parameter 'noswap'
	[  +8.924087] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:06] tmpfs: Unknown parameter 'noswap'
	[  +8.057177] tmpfs: Unknown parameter 'noswap'
	[  +0.556972] tmpfs: Unknown parameter 'noswap'
	[  +8.969392] tmpfs: Unknown parameter 'noswap'
	[  +0.048063] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:07] tmpfs: Unknown parameter 'noswap'
	[ +34.094078] tmpfs: Unknown parameter 'noswap'
	[  +0.670795] tmpfs: Unknown parameter 'noswap'
	[  +8.459494] tmpfs: Unknown parameter 'noswap'
	[  +2.697435] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:08] tmpfs: Unknown parameter 'noswap'
	[  +1.487418] tmpfs: Unknown parameter 'noswap'
	[ +41.022105] tmpfs: Unknown parameter 'noswap'
	[  +1.253626] tmpfs: Unknown parameter 'noswap'
	[  +1.096222] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [238eee50046a] <==
	{"level":"warn","ts":"2025-10-27T20:08:21.982412Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:08:21.148066Z","time spent":"834.334492ms","remote":"127.0.0.1:34948","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-10-27T20:08:24.393596Z","caller":"traceutil/trace.go:172","msg":"trace[1880246924] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"103.665874ms","start":"2025-10-27T20:08:24.289909Z","end":"2025-10-27T20:08:24.393575Z","steps":["trace[1880246924] 'process raft request'  (duration: 94.983988ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:08:24.518192Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.215957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-10-27T20:08:24.518575Z","caller":"traceutil/trace.go:172","msg":"trace[1259323712] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:440; }","duration":"112.439178ms","start":"2025-10-27T20:08:24.405942Z","end":"2025-10-27T20:08:24.518381Z","steps":["trace[1259323712] 'agreement among raft nodes before linearized reading'  (duration: 89.275381ms)","trace[1259323712] 'range keys from in-memory index tree'  (duration: 22.861469ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T20:08:24.518880Z","caller":"traceutil/trace.go:172","msg":"trace[1542948215] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"112.868217ms","start":"2025-10-27T20:08:24.405999Z","end":"2025-10-27T20:08:24.518867Z","steps":["trace[1542948215] 'process raft request'  (duration: 112.627895ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T20:08:24.518872Z","caller":"traceutil/trace.go:172","msg":"trace[107084076] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"114.531167ms","start":"2025-10-27T20:08:24.404314Z","end":"2025-10-27T20:08:24.518845Z","steps":["trace[107084076] 'process raft request'  (duration: 90.952832ms)","trace[107084076] 'compare'  (duration: 22.764061ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T20:08:24.518818Z","caller":"traceutil/trace.go:172","msg":"trace[2003993897] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"114.34405ms","start":"2025-10-27T20:08:24.404446Z","end":"2025-10-27T20:08:24.518790Z","steps":["trace[2003993897] 'process raft request'  (duration: 114.101628ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T20:08:25.900705Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T20:08:25.900781Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"newest-cni-791900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-27T20:08:25.900962Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T20:08:32.902889Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-27T20:08:32.904380Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T20:08:32.904407Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-27T20:08:32.904510Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-10-27T20:08:32.904545Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T20:08:32.904559Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-27T20:08:32.904416Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T20:08:32.904525Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-27T20:08:32.904585Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-27T20:08:32.904582Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T20:08:32.904574Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-27T20:08:32.916358Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-27T20:08:32.916618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T20:08:32.916721Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-27T20:08:32.916818Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"newest-cni-791900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [803d08509c21] <==
	{"level":"info","ts":"2025-10-27T20:09:02.607110Z","caller":"traceutil/trace.go:172","msg":"trace[1501071185] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"120.318779ms","start":"2025-10-27T20:09:02.486725Z","end":"2025-10-27T20:09:02.607043Z","steps":["trace[1501071185] 'process raft request'  (duration: 96.21648ms)","trace[1501071185] 'compare'  (duration: 23.440039ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:09:02.818101Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.778865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:node-proxier\" limit:1 ","response":"range_response_count:1 size:699"}
	{"level":"info","ts":"2025-10-27T20:09:02.818181Z","caller":"traceutil/trace.go:172","msg":"trace[1528217008] range","detail":"{range_begin:/registry/clusterrolebindings/system:node-proxier; range_end:; response_count:1; response_revision:557; }","duration":"115.873774ms","start":"2025-10-27T20:09:02.702285Z","end":"2025-10-27T20:09:02.818159Z","steps":["trace[1528217008] 'range keys from in-memory index tree'  (duration: 109.965235ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:09:02.818436Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.147451ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356249621792625 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-vbg5n.187271fe177b09f0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-vbg5n.187271fe177b09f0\" value_size:648 lease:6414984212767016582 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T20:09:02.818593Z","caller":"traceutil/trace.go:172","msg":"trace[1027241301] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"122.308861ms","start":"2025-10-27T20:09:02.696269Z","end":"2025-10-27T20:09:02.818578Z","steps":["trace[1027241301] 'process raft request'  (duration: 12.012296ms)","trace[1027241301] 'compare'  (duration: 110.010639ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:09:06.058213Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638356249621792782,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-27T20:09:06.501679Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:09:05.524188Z","time spent":"977.408715ms","remote":"127.0.0.1:60768","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2025/10/27 20:09:06 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-10-27T20:09:06.502223Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"827.703142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-66bc5c9577-9dn89.187271fed8625b51\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-10-27T20:09:06.502423Z","caller":"traceutil/trace.go:172","msg":"trace[84124662] range","detail":"{range_begin:/registry/events/kube-system/coredns-66bc5c9577-9dn89.187271fed8625b51; range_end:; }","duration":"828.579521ms","start":"2025-10-27T20:09:05.673755Z","end":"2025-10-27T20:09:06.502334Z","steps":["trace[84124662] 'agreement among raft nodes before linearized reading'  (duration: 827.661938ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:09:06.502723Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:09:05.673737Z","time spent":"828.878349ms","remote":"127.0.0.1:60516","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":0,"response size":0,"request content":"key:\"/registry/events/kube-system/coredns-66bc5c9577-9dn89.187271fed8625b51\" limit:1 "}
	2025/10/27 20:09:06 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-10-27T20:09:06.558923Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638356249621792782,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-27T20:09:07.059544Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638356249621792782,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-27T20:09:07.560268Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638356249621792782,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-27T20:09:08.060782Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638356249621792782,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-27T20:09:08.144531Z","caller":"wal/wal.go:845","msg":"slow fdatasync","took":"2.620128649s","expected-duration":"1s"}
	{"level":"info","ts":"2025-10-27T20:09:08.144995Z","caller":"traceutil/trace.go:172","msg":"trace[115043272] linearizableReadLoop","detail":"{readStateIndex:613; appliedIndex:613; }","duration":"2.58781872s","start":"2025-10-27T20:09:05.557152Z","end":"2025-10-27T20:09:08.144971Z","steps":["trace[115043272] 'read index received'  (duration: 2.587810919s)","trace[115043272] 'applied index is now lower than readState.Index'  (duration: 7µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:09:08.145300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"2.588129847s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-10-27T20:09:08.145387Z","caller":"traceutil/trace.go:172","msg":"trace[2052982247] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:584; }","duration":"2.588179652s","start":"2025-10-27T20:09:05.557148Z","end":"2025-10-27T20:09:08.145327Z","steps":["trace[2052982247] 'agreement among raft nodes before linearized reading'  (duration: 2.587932229s)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:09:08.145480Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:09:05.557130Z","time spent":"2.588337967s","remote":"127.0.0.1:60790","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":232,"request content":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 "}
	{"level":"warn","ts":"2025-10-27T20:09:08.145782Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"1.86392939s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T20:09:08.145887Z","caller":"traceutil/trace.go:172","msg":"trace[701626543] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:585; }","duration":"1.8640399s","start":"2025-10-27T20:09:06.281838Z","end":"2025-10-27T20:09:08.145878Z","steps":["trace[701626543] 'agreement among raft nodes before linearized reading'  (duration: 1.863905988s)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:09:08.145915Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:09:06.281816Z","time spent":"1.864091805s","remote":"127.0.0.1:60350","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-10-27T20:09:11.930098Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"395.343443ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356249621792786 > lease_revoke:<id:59069a2748ddf16f>","response":"size:28"}
	
	
	==> kernel <==
	 20:09:30 up  1:22,  0 user,  load average: 8.18, 6.26, 4.33
	Linux newest-cni-791900 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [21eeea74b860] <==
	W1027 20:08:35.211618       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.250998       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.254774       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.280674       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.292309       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.315605       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.330265       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.335953       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.375635       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.421918       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.461155       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.472596       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.475127       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.515083       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.569220       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.571812       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.589212       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.622396       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.640716       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.640800       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.658015       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.791153       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.846887       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.998263       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.998307       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7176c4cb6419] <==
	 > logger="UnhandledError"
	I1027 20:08:59.787719       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1027 20:08:59.787759       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1027 20:08:59.789940       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1027 20:09:00.000641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:09:00.001068       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:09:00.492177       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:09:00.886139       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:09:01.887889       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:09:02.083572       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:09:02.608422       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.34.77"}
	I1027 20:09:02.700045       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.28.43"}
	I1027 20:09:03.794701       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 20:09:06.500732       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1027 20:09:06.500714       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-10-27T20:09:06.500186Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0011b4f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	{"level":"warn","ts":"2025-10-27T20:09:06.500223Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f4e780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1027 20:09:06.502497       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1027 20:09:06.502531       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1027 20:09:06.502546       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.674242ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1027 20:09:06.502591       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.467724ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1027 20:09:06.502599       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1027 20:09:06.503783       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1027 20:09:06.504095       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.998263ms" method="PATCH" path="/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-9dn89.187271fed8625b51" result=null
	E1027 20:09:06.504333       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.231484ms" method="PATCH" path="/api/v1/namespaces/kube-system/pods/kube-controller-manager-newest-cni-791900/status" result=null
	
	
	==> kube-controller-manager [745e4a2e15e4] <==
	I1027 20:08:10.588531       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 20:08:10.588709       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 20:08:10.588545       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 20:08:10.588628       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 20:08:10.588519       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 20:08:10.588562       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 20:08:10.589023       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 20:08:10.589250       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-791900"
	I1027 20:08:10.589383       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 20:08:10.589786       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 20:08:10.590007       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 20:08:10.590106       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:08:10.590044       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:08:10.590056       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:08:10.590025       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 20:08:10.590073       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 20:08:10.590089       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 20:08:10.590065       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 20:08:10.603001       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:08:10.629663       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-791900" podCIDRs=["10.42.0.0/24"]
	I1027 20:08:10.661307       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:08:10.661386       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 20:08:10.661398       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 20:08:15.589799       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1027 20:08:24.302699       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-746fcd58dc\" failed with pods \"metrics-server-746fcd58dc-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c1e7a89ac54a] <==
	I1027 20:09:05.110866       1 controllermanager.go:781] "Started controller" controller="node-lifecycle-controller"
	I1027 20:09:05.110994       1 node_lifecycle_controller.go:453] "Sending events to api server" logger="node-lifecycle-controller"
	I1027 20:09:05.111015       1 node_lifecycle_controller.go:464] "Starting node controller" logger="node-lifecycle-controller"
	I1027 20:09:05.111090       1 shared_informer.go:349] "Waiting for caches to sync" controller="taint"
	I1027 20:09:05.159803       1 controllermanager.go:781] "Started controller" controller="clusterrole-aggregation-controller"
	I1027 20:09:05.159932       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1027 20:09:05.159951       1 shared_informer.go:349] "Waiting for caches to sync" controller="ClusterRoleAggregator"
	I1027 20:09:05.210294       1 controllermanager.go:781] "Started controller" controller="persistentvolume-protection-controller"
	I1027 20:09:05.210388       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I1027 20:09:05.210410       1 controllermanager.go:744] "Warning: controller is disabled" controller="selinux-warning-controller"
	I1027 20:09:05.210428       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1027 20:09:05.210450       1 shared_informer.go:349] "Waiting for caches to sync" controller="PV protection"
	I1027 20:09:05.263139       1 controllermanager.go:781] "Started controller" controller="pod-garbage-collector-controller"
	I1027 20:09:05.263307       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1027 20:09:05.263377       1 shared_informer.go:349] "Waiting for caches to sync" controller="GC"
	E1027 20:09:05.418120       1 namespaced_resources_deleter.go:164] "Unhandled Error" err="unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 20:09:05.418345       1 controllermanager.go:781] "Started controller" controller="namespace-controller"
	I1027 20:09:05.418399       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1027 20:09:05.418482       1 shared_informer.go:349] "Waiting for caches to sync" controller="namespace"
	I1027 20:09:05.459836       1 controllermanager.go:781] "Started controller" controller="deployment-controller"
	I1027 20:09:05.460013       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1027 20:09:05.460026       1 shared_informer.go:349] "Waiting for caches to sync" controller="deployment"
	I1027 20:09:05.511079       1 controllermanager.go:781] "Started controller" controller="replicaset-controller"
	I1027 20:09:05.511337       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1027 20:09:05.511357       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	
	
	==> kube-proxy [30075e9acc07] <==
	I1027 20:08:24.086393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:08:24.187713       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:08:24.187970       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 20:08:24.188126       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:08:24.390441       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:08:24.390542       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:08:24.406408       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E1027 20:08:24.418877       1 proxier.go:270] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E1027 20:08:24.429626       1 proxier.go:270] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I1027 20:08:24.429687       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:08:24.429696       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1027 20:08:24.485575       1 metrics.go:379] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E1027 20:08:24.502230       1 metrics.go:379] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I1027 20:08:24.512608       1 config.go:309] "Starting node config controller"
	I1027 20:08:24.512806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:08:24.512818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:08:24.513031       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:08:24.513045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:08:24.513089       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:08:24.513096       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:08:24.513088       1 config.go:200] "Starting service config controller"
	I1027 20:08:24.513127       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:08:24.614172       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:08:24.614306       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:08:24.614549       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dcc03d944fd3] <==
	I1027 20:09:03.284528       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:09:03.384822       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:09:03.384946       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 20:09:03.385201       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:09:03.434260       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:09:03.434371       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:09:03.490863       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E1027 20:09:03.505663       1 proxier.go:270] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E1027 20:09:03.520474       1 proxier.go:270] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I1027 20:09:03.520576       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:09:03.520605       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1027 20:09:03.535061       1 metrics.go:379] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E1027 20:09:03.551775       1 metrics.go:379] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I1027 20:09:03.578748       1 config.go:200] "Starting service config controller"
	I1027 20:09:03.578761       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:09:03.578786       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:09:03.578790       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:09:03.578804       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:09:03.578809       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:09:03.579682       1 config.go:309] "Starting node config controller"
	I1027 20:09:03.579688       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:09:03.579694       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:09:03.678953       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 20:09:03.679050       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:09:03.682263       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [058fb7763dd1] <==
	E1027 20:08:03.009145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 20:08:03.821947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 20:08:03.873874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 20:08:03.879391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 20:08:03.894641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 20:08:03.928144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 20:08:03.966235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 20:08:03.982432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 20:08:04.011767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 20:08:04.035651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 20:08:04.051959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 20:08:04.123647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 20:08:04.247172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 20:08:04.282527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 20:08:04.325016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 20:08:04.388986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 20:08:04.470146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 20:08:04.523094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1027 20:08:07.197307       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:08:25.896933       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:08:25.897074       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 20:08:25.897086       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 20:08:25.897124       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 20:08:25.897185       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 20:08:25.897209       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [12e30c3d2252] <==
	I1027 20:08:55.811180       1 serving.go:386] Generated self-signed cert in-memory
	W1027 20:08:58.784904       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 20:08:58.784957       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 20:08:58.784973       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 20:08:58.784985       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 20:08:58.992756       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:08:58.992813       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:08:58.998446       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:08:58.998469       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:08:58.998517       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:08:58.998586       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:08:59.098882       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:08:59 newest-cni-791900 kubelet[1486]: I1027 20:08:59.800505    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee68ca38-17ef-44ad-b834-acbd164ef7d6-lib-modules\") pod \"kube-proxy-vbg5n\" (UID: \"ee68ca38-17ef-44ad-b834-acbd164ef7d6\") " pod="kube-system/kube-proxy-vbg5n"
	Oct 27 20:08:59 newest-cni-791900 kubelet[1486]: I1027 20:08:59.800618    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac6bfc17-79ef-4e20-a54a-3b65aabcba9e-tmp\") pod \"storage-provisioner\" (UID: \"ac6bfc17-79ef-4e20-a54a-3b65aabcba9e\") " pod="kube-system/storage-provisioner"
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.183210    1486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b696566a-ca9c-4790-aaca-5da2c8011a54-config-volume\") pod \"b696566a-ca9c-4790-aaca-5da2c8011a54\" (UID: \"b696566a-ca9c-4790-aaca-5da2c8011a54\") "
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.183284    1486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll5r8\" (UniqueName: \"kubernetes.io/projected/b696566a-ca9c-4790-aaca-5da2c8011a54-kube-api-access-ll5r8\") pod \"b696566a-ca9c-4790-aaca-5da2c8011a54\" (UID: \"b696566a-ca9c-4790-aaca-5da2c8011a54\") "
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.190477    1486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b696566a-ca9c-4790-aaca-5da2c8011a54-config-volume" (OuterVolumeSpecName: "config-volume") pod "b696566a-ca9c-4790-aaca-5da2c8011a54" (UID: "b696566a-ca9c-4790-aaca-5da2c8011a54"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.199820    1486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b696566a-ca9c-4790-aaca-5da2c8011a54-kube-api-access-ll5r8" (OuterVolumeSpecName: "kube-api-access-ll5r8") pod "b696566a-ca9c-4790-aaca-5da2c8011a54" (UID: "b696566a-ca9c-4790-aaca-5da2c8011a54"). InnerVolumeSpecName "kube-api-access-ll5r8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.284452    1486 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b696566a-ca9c-4790-aaca-5da2c8011a54-config-volume\") on node \"newest-cni-791900\" DevicePath \"\""
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.284564    1486 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ll5r8\" (UniqueName: \"kubernetes.io/projected/b696566a-ca9c-4790-aaca-5da2c8011a54-kube-api-access-ll5r8\") on node \"newest-cni-791900\" DevicePath \"\""
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: E1027 20:09:00.996419    1486 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: E1027 20:09:00.996578    1486 helpers.go:860] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: I1027 20:09:02.092672    1486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a4c5dba83f407bfba9a3ac4dab1311636c96509146a1c14b3b20c8ac83a4bf8"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: I1027 20:09:02.389339    1486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cfa46c3ed14792a85b33df1894979416b5be80101e10095e1386c6733d0876d"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: I1027 20:09:02.404335    1486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b82b27fd319c70c226be85d6a4939cc1274df82e36a88762b4d8bc4526ffccb8"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: I1027 20:09:02.416344    1486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25926e275c2c6333ca79efe406c12138726e6c6b8a80cf3482ad85f830fc0c96"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: I1027 20:09:02.722626    1486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b696566a-ca9c-4790-aaca-5da2c8011a54" path="/var/lib/kubelet/pods/b696566a-ca9c-4790-aaca-5da2c8011a54/volumes"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: E1027 20:09:02.819192    1486 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: E1027 20:09:02.819409    1486 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: E1027 20:09:02.819718    1486 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-2jmdk_kube-system(30732149-b3b4-43dc-80bc-6fd70db8a5bf): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" logger="UnhandledError"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: E1027 20:09:02.819859    1486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-2jmdk" podUID="30732149-b3b4-43dc-80bc-6fd70db8a5bf"
	Oct 27 20:09:03 newest-cni-791900 kubelet[1486]: E1027 20:09:03.518614    1486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-2jmdk" podUID="30732149-b3b4-43dc-80bc-6fd70db8a5bf"
	Oct 27 20:09:04 newest-cni-791900 kubelet[1486]: E1027 20:09:04.665649    1486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-2jmdk" podUID="30732149-b3b4-43dc-80bc-6fd70db8a5bf"
	Oct 27 20:09:06 newest-cni-791900 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 20:09:06 newest-cni-791900 kubelet[1486]: I1027 20:09:06.492743    1486 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 27 20:09:06 newest-cni-791900 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 20:09:06 newest-cni-791900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> storage-provisioner [5f27e5738a2e] <==
	I1027 20:09:03.114013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [a428f291ad15] <==
	I1027 20:08:24.517997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-791900 -n newest-cni-791900
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-791900 -n newest-cni-791900: exit status 2 (755.1395ms)

-- stdout --
	Paused

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-791900" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-791900
helpers_test.go:243: (dbg) docker inspect newest-cni-791900:

-- stdout --
	[
	    {
	        "Id": "2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95",
	        "Created": "2025-10-27T20:07:34.825694187Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:08:39.023116256Z",
	            "FinishedAt": "2025-10-27T20:08:36.640410695Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95/hostname",
	        "HostsPath": "/var/lib/docker/containers/2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95/hosts",
	        "LogPath": "/var/lib/docker/containers/2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95/2046cef085e3c060cd8e47c103e522f9c889003a242479caa55adfb418047e95-json.log",
	        "Name": "/newest-cni-791900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-791900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-791900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9aa63b6849b5852197b639a2ad68499ec2908c75c965a46c026bc42a00fa727d-init/diff:/var/lib/docker/overlay2/f5981ab6bccf9778a1137884da3b6053ae71c5892b008b6ad4dbe508a3a06fc6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9aa63b6849b5852197b639a2ad68499ec2908c75c965a46c026bc42a00fa727d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9aa63b6849b5852197b639a2ad68499ec2908c75c965a46c026bc42a00fa727d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9aa63b6849b5852197b639a2ad68499ec2908c75c965a46c026bc42a00fa727d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-791900",
	                "Source": "/var/lib/docker/volumes/newest-cni-791900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-791900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-791900",
	                "name.minikube.sigs.k8s.io": "newest-cni-791900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dcd5659e68a4d2a0c820d5ad01b8a0a29a12c1460a1e2587e9f6880902970b09",
	            "SandboxKey": "/var/run/docker/netns/dcd5659e68a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59316"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59317"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59318"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59319"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59315"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-791900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c739e715394ae9a91d51c64d75bfcfe043c33d280180ca7ed1a6c4cbb2ab288e",
	                    "EndpointID": "9e31e592ee133630da93e4e397b06782e2a8b11d98c7e1c90ace22d2820b47b4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-791900",
	                        "2046cef085e3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-791900 -n newest-cni-791900
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-791900 -n newest-cni-791900: exit status 2 (702.4908ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-791900 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-791900 logs -n 25: (12.3570694s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                        ARGS                                                                                                         │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-036500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                       │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:07 UTC │ 27 Oct 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-892000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:07 UTC │ 27 Oct 25 20:08 UTC │
	│ start   │ -p embed-certs-036500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.1                                                                                        │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:07 UTC │ 27 Oct 25 20:08 UTC │
	│ start   │ -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker                                                                                                                             │ kubernetes-upgrade-419600    │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker                                                                                                      │ kubernetes-upgrade-419600    │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p newest-cni-791900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                             │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ stop    │ -p newest-cni-791900 --alsologtostderr -v=3                                                                                                                                                                         │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ addons  │ enable dashboard -p newest-cni-791900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                        │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ start   │ -p newest-cni-791900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.34.1 │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:09 UTC │
	│ image   │ default-k8s-diff-port-892000 image list --format=json                                                                                                                                                               │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ pause   │ -p default-k8s-diff-port-892000 --alsologtostderr -v=1                                                                                                                                                              │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ image   │ embed-certs-036500 image list --format=json                                                                                                                                                                         │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ pause   │ -p embed-certs-036500 --alsologtostderr -v=1                                                                                                                                                                        │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ unpause │ -p default-k8s-diff-port-892000 --alsologtostderr -v=1                                                                                                                                                              │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ unpause │ -p embed-certs-036500 --alsologtostderr -v=1                                                                                                                                                                        │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ delete  │ -p default-k8s-diff-port-892000                                                                                                                                                                                     │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ delete  │ -p embed-certs-036500                                                                                                                                                                                               │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ delete  │ -p default-k8s-diff-port-892000                                                                                                                                                                                     │ default-k8s-diff-port-892000 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ start   │ -p auto-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker                                                                                                                       │ auto-938400                  │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │                     │
	│ delete  │ -p embed-certs-036500                                                                                                                                                                                               │ embed-certs-036500           │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	│ start   │ -p kindnet-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker                                                                                                      │ kindnet-938400               │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-419600                                                                                                                                                                                        │ kubernetes-upgrade-419600    │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:09 UTC │ 27 Oct 25 20:09 UTC │
	│ image   │ newest-cni-791900 image list --format=json                                                                                                                                                                          │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:09 UTC │ 27 Oct 25 20:09 UTC │
	│ pause   │ -p newest-cni-791900 --alsologtostderr -v=1                                                                                                                                                                         │ newest-cni-791900            │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:09 UTC │                     │
	│ start   │ -p calico-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker                                                                                                        │ calico-938400                │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 20:09 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:09:28
	Running on machine: minikube4
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:09:28.281579   13948 out.go:360] Setting OutFile to fd 1864 ...
	I1027 20:09:28.338958   13948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:09:28.338958   13948 out.go:374] Setting ErrFile to fd 1188...
	I1027 20:09:28.338958   13948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:09:28.360957   13948 out.go:368] Setting JSON to false
	I1027 20:09:28.365382   13948 start.go:131] hostinfo: {"hostname":"minikube4","uptime":5018,"bootTime":1761590749,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1027 20:09:28.365382   13948 start.go:139] gopshost.Virtualization returned error: not implemented yet
	I1027 20:09:28.368912   13948 out.go:179] * [calico-938400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	I1027 20:09:28.372733   13948 notify.go:220] Checking for updates...
	I1027 20:09:28.376790   13948 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1027 20:09:28.382191   13948 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:09:28.387525   13948 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1027 20:09:28.392028   13948 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:09:28.394223   13948 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:09:28.397598   13948 config.go:182] Loaded profile config "auto-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 20:09:28.398227   13948 config.go:182] Loaded profile config "kindnet-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 20:09:28.398826   13948 config.go:182] Loaded profile config "newest-cni-791900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 20:09:28.398826   13948 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:09:28.576959   13948 docker.go:123] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1027 20:09:28.585533   13948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:09:28.869497   13948 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-10-27 20:09:28.849502005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 20:09:28.876494   13948 out.go:179] * Using the docker driver based on user configuration
	I1027 20:09:24.355311    2576 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-938400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (21.171087s)
	I1027 20:09:24.355395    2576 kic.go:203] duration metric: took 21.1802155s to extract preloaded images to volume ...
	I1027 20:09:24.366813    2576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:09:24.694279    2576 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:88 SystemTime:2025-10-27 20:09:24.675645678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 20:09:24.701276    2576 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 20:09:25.005628    2576 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-938400 --name kindnet-938400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-938400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-938400 --network kindnet-938400 --ip 192.168.103.2 --volume kindnet-938400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 20:09:26.017694    2576 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-938400 --name kindnet-938400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-938400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-938400 --network kindnet-938400 --ip 192.168.103.2 --volume kindnet-938400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8: (1.0120522s)
	I1027 20:09:26.027332    2576 cli_runner.go:164] Run: docker container inspect kindnet-938400 --format={{.State.Running}}
	I1027 20:09:26.100124    2576 cli_runner.go:164] Run: docker container inspect kindnet-938400 --format={{.State.Status}}
	I1027 20:09:26.165129    2576 cli_runner.go:164] Run: docker exec kindnet-938400 stat /var/lib/dpkg/alternatives/iptables
	I1027 20:09:26.280132    2576 oci.go:144] the created container "kindnet-938400" has a running status.
	I1027 20:09:26.280132    2576 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-938400\id_rsa...
	I1027 20:09:26.426340    2576 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-938400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 20:09:26.521535    2576 cli_runner.go:164] Run: docker container inspect kindnet-938400 --format={{.State.Status}}
	I1027 20:09:26.599516    2576 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 20:09:26.599516    2576 kic_runner.go:114] Args: [docker exec --privileged kindnet-938400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 20:09:26.796183    2576 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-938400\id_rsa...
	I1027 20:09:28.880499   13948 start.go:305] selected driver: docker
	I1027 20:09:28.880499   13948 start.go:925] validating driver "docker" against <nil>
	I1027 20:09:28.880499   13948 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:09:28.927429   13948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:09:29.197542   13948 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-10-27 20:09:29.17568015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescri
ption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progra
m Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 20:09:29.198591   13948 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 20:09:29.200141   13948 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:09:29.202986   13948 out.go:179] * Using Docker Desktop driver with root privileges
	I1027 20:09:29.205210   13948 cni.go:84] Creating CNI manager for "calico"
	I1027 20:09:29.205210   13948 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1027 20:09:29.205210   13948 start.go:349] cluster config:
	{Name:calico-938400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-938400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:09:29.207748   13948 out.go:179] * Starting "calico-938400" primary control-plane node in "calico-938400" cluster
	I1027 20:09:29.210352   13948 cache.go:123] Beginning downloading kic base image for docker with docker
	I1027 20:09:29.213314   13948 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:09:29.215313   13948 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1027 20:09:29.215313   13948 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:09:29.215313   13948 preload.go:198] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1027 20:09:29.215313   13948 cache.go:58] Caching tarball of preloaded images
	I1027 20:09:29.216314   13948 preload.go:233] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1027 20:09:29.216314   13948 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1027 20:09:29.216314   13948 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-938400\config.json ...
	I1027 20:09:29.216314   13948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-938400\config.json: {Name:mkb30381e4d13216791044aa4fbad32ffdc00ca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:09:29.290312   13948 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:09:29.290312   13948 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:09:29.290312   13948 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:09:29.291304   13948 start.go:360] acquireMachinesLock for calico-938400: {Name:mkd534df56e24d9f4777c17c2a2b004656195e47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:09:29.291304   13948 start.go:364] duration metric: took 0s to acquireMachinesLock for "calico-938400"
	I1027 20:09:29.291304   13948 start.go:93] Provisioning new machine with config: &{Name:calico-938400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-938400 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1027 20:09:29.291304   13948 start.go:125] createHost starting for "" (driver="docker")
	I1027 20:09:29.273316   10460 cli_runner.go:164] Run: docker container inspect auto-938400 --format={{.State.Status}}
	I1027 20:09:29.328310   10460 machine.go:93] provisionDockerMachine start ...
	I1027 20:09:29.333307   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-938400
	I1027 20:09:29.387327   10460 main.go:141] libmachine: Using SSH client type: native
	I1027 20:09:29.400314   10460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61a80] 0xd645c0 <nil>  [] 0s} 127.0.0.1 59439 <nil> <nil>}
	I1027 20:09:29.400314   10460 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:09:29.597316   10460 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-938400
	
	I1027 20:09:29.597316   10460 ubuntu.go:182] provisioning hostname "auto-938400"
	I1027 20:09:29.606319   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-938400
	I1027 20:09:29.659308   10460 main.go:141] libmachine: Using SSH client type: native
	I1027 20:09:29.659308   10460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61a80] 0xd645c0 <nil>  [] 0s} 127.0.0.1 59439 <nil> <nil>}
	I1027 20:09:29.659308   10460 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-938400 && echo "auto-938400" | sudo tee /etc/hostname
	I1027 20:09:29.860754   10460 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-938400
	
	I1027 20:09:29.868251   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-938400
	I1027 20:09:29.918119   10460 main.go:141] libmachine: Using SSH client type: native
	I1027 20:09:29.918119   10460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61a80] 0xd645c0 <nil>  [] 0s} 127.0.0.1 59439 <nil> <nil>}
	I1027 20:09:29.918119   10460 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-938400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-938400/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-938400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:09:30.088248   10460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
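The `/etc/hosts` snippet the provisioner just ran (replace an existing `127.0.1.1` line, else append one) can be reproduced outside the container. A minimal sketch, run against a throwaway temp file instead of the real `/etc/hosts` so it needs no sudo; the hostname and seed contents are assumptions, not taken from the cluster:

```shell
#!/usr/bin/env sh
# Sketch of minikube's hostname fix-up, against a temp copy of /etc/hosts.
NAME=auto-938400
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Only touch the file if the hostname is not already present.
if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # An existing 127.0.1.1 entry: rewrite it in place.
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 line yet: append one.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Because the outer `grep -q` guard runs first, re-running the script is a no-op once the entry exists, which is why the logged SSH command is safe to repeat on every provision.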
	I1027 20:09:30.088299   10460 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1027 20:09:30.088373   10460 ubuntu.go:190] setting up certificates
	I1027 20:09:30.088422   10460 provision.go:84] configureAuth start
	I1027 20:09:30.095304   10460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-938400
	I1027 20:09:30.148288   10460 provision.go:143] copyHostCerts
	I1027 20:09:30.149281   10460 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1027 20:09:30.150293   10460 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1027 20:09:30.150293   10460 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1027 20:09:30.150293   10460 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1027 20:09:30.151287   10460 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1027 20:09:30.151287   10460 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1027 20:09:30.151287   10460 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1027 20:09:30.152282   10460 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.auto-938400 san=[127.0.0.1 192.168.85.2 auto-938400 localhost minikube]
	I1027 20:09:30.189844   10460 provision.go:177] copyRemoteCerts
	I1027 20:09:30.196846   10460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:09:30.203838   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-938400
	I1027 20:09:30.257168   10460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59439 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-938400\id_rsa Username:docker}
	I1027 20:09:30.398142   10460 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1027 20:09:30.436675   10460 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:09:30.467680   10460 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:09:30.496687   10460 provision.go:87] duration metric: took 408.2591ms to configureAuth
	I1027 20:09:30.496687   10460 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:09:30.497678   10460 config.go:182] Loaded profile config "auto-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 20:09:30.503685   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-938400
	I1027 20:09:30.555688   10460 main.go:141] libmachine: Using SSH client type: native
	I1027 20:09:30.555688   10460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61a80] 0xd645c0 <nil>  [] 0s} 127.0.0.1 59439 <nil> <nil>}
	I1027 20:09:30.556688   10460 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1027 20:09:30.816481   10460 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1027 20:09:30.816481   10460 ubuntu.go:71] root file system type: overlay
	I1027 20:09:30.816481   10460 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1027 20:09:30.822478   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-938400
	I1027 20:09:30.880477   10460 main.go:141] libmachine: Using SSH client type: native
	I1027 20:09:30.880477   10460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61a80] 0xd645c0 <nil>  [] 0s} 127.0.0.1 59439 <nil> <nil>}
	I1027 20:09:30.880477   10460 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1027 20:09:31.227316   10460 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1027 20:09:31.235318   10460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-938400
	I1027 20:09:31.298322   10460 main.go:141] libmachine: Using SSH client type: native
	I1027 20:09:31.298322   10460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd61a80] 0xd645c0 <nil>  [] 0s} 127.0.0.1 59439 <nil> <nil>}
	I1027 20:09:31.298322   10460 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
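The command above relies on `diff` exiting non-zero when the staged `docker.service.new` differs from the installed unit, so the move/reload/restart branch only fires on an actual change. A minimal sketch of that idempotent update pattern, using temp files in place of the `/lib/systemd/system` paths (file names and contents here are illustrative assumptions):

```shell
#!/usr/bin/env sh
# "diff || replace" pattern: update a config file only when it changed.
CUR=$(mktemp)
NEW=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd -H fd://' > "$CUR"
echo 'ExecStart=/usr/bin/dockerd -H fd:// --tlsverify' > "$NEW"

# diff exits 0 when the files match, non-zero when they differ;
# the braced branch therefore runs only on a real change.
diff -u "$CUR" "$NEW" > /dev/null || {
    mv "$NEW" "$CUR"
    # The real provisioner would now run:
    #   systemctl daemon-reload && systemctl enable docker && systemctl restart docker
}
cat "$CUR"
```

On an unchanged unit the whole command is a no-op, which is why minikube can run it unconditionally on every start without bouncing the Docker daemon.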
	
	
	==> Docker <==
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Loaded network plugin cni"
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 27 20:08:49 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:49Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 27 20:08:49 newest-cni-791900 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Oct 27 20:08:50 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-9dn89_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"76de13807fcf551d2617c742466234b72635199395deb1d895af124186ce3e61\""
	Oct 27 20:08:50 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-888vf_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"c5b4ebfa6237c7ee78ef3353889dc2c61d445b27c34734ccb43cec034b30b3fd\""
	Oct 27 20:08:51 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-746fcd58dc-2jmdk_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"16fda1b342195affb116a2b37e4dcbc07e36ab96396a3b0e5468ff223d0eee02\""
	Oct 27 20:08:51 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65dbcd1cda7c040391e9eaeb44c74b595f318e352fca285c4db29b6aa5264c82/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:08:51 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0b090dda40b9ac4b2a3804afaee3130229ad8462d5372597d68622b88d4122f3/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:08:52 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb5cf53245aabb6a58ccca9867937c8515b1f817cb50c49b60e110a784be8d3b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:08:52 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/44e1a61e5ebe4de0b4b742ac211a3d7e64f8176e0c3f160bdfcbb4814e0cf70e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:08:59 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:08:59Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.42.0.0/24,},}"
	Oct 27 20:09:02 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:09:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a4c5dba83f407bfba9a3ac4dab1311636c96509146a1c14b3b20c8ac83a4bf8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:09:02 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:09:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b82b27fd319c70c226be85d6a4939cc1274df82e36a88762b4d8bc4526ffccb8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:09:02 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:09:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3cfa46c3ed14792a85b33df1894979416b5be80101e10095e1386c6733d0876d/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 27 20:09:02 newest-cni-791900 cri-dockerd[1255]: time="2025-10-27T20:09:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/25926e275c2c6333ca79efe406c12138726e6c6b8a80cf3482ad85f830fc0c96/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 27 20:09:02 newest-cni-791900 dockerd[927]: time="2025-10-27T20:09:02.682768345Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Oct 27 20:09:02 newest-cni-791900 dockerd[927]: time="2025-10-27T20:09:02.682863054Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Oct 27 20:09:02 newest-cni-791900 dockerd[927]: time="2025-10-27T20:09:02.818174001Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Oct 27 20:09:02 newest-cni-791900 dockerd[927]: time="2025-10-27T20:09:02.818233607Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Oct 27 20:09:07 newest-cni-791900 dockerd[927]: time="2025-10-27T20:09:07.035868035Z" level=error msg="Handler for POST /v1.51/containers/803d08509c21/pause returned error: cannot pause container 803d08509c218a1a7c9e1f16f99beb496050e64cc7b66b46da9b22ef5193b62c: OCI runtime pause failed: unable to freeze: unknown"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2c5823499033c       52546a367cc9e       32 seconds ago       Running             coredns                   1                   25926e275c2c6       coredns-66bc5c9577-9dn89                    kube-system
	5f27e5738a2e8       6e38f40d628db       32 seconds ago       Running             storage-provisioner       1                   b82b27fd319c7       storage-provisioner                         kube-system
	dcc03d944fd38       fc25172553d79       32 seconds ago       Running             kube-proxy                1                   3a4c5dba83f40       kube-proxy-vbg5n                            kube-system
	803d08509c218       5f1f5298c888d       42 seconds ago       Running             etcd                      1                   44e1a61e5ebe4       etcd-newest-cni-791900                      kube-system
	7176c4cb64190       c3994bc696102       42 seconds ago       Running             kube-apiserver            1                   eb5cf53245aab       kube-apiserver-newest-cni-791900            kube-system
	c1e7a89ac54a0       c80c8dbafe7dd       43 seconds ago       Running             kube-controller-manager   1                   0b090dda40b9a       kube-controller-manager-newest-cni-791900   kube-system
	12e30c3d2252d       7dd6aaa1717ab       43 seconds ago       Running             kube-scheduler            1                   65dbcd1cda7c0       kube-scheduler-newest-cni-791900            kube-system
	a428f291ad15d       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   c65ed567812e2       storage-provisioner                         kube-system
	17b84a4a42c51       52546a367cc9e       About a minute ago   Exited              coredns                   0                   76de13807fcf5       coredns-66bc5c9577-9dn89                    kube-system
	bdb5835021a10       52546a367cc9e       About a minute ago   Exited              coredns                   0                   c5b4ebfa6237c       coredns-66bc5c9577-888vf                    kube-system
	30075e9acc075       fc25172553d79       About a minute ago   Exited              kube-proxy                0                   8a25a9d798473       kube-proxy-vbg5n                            kube-system
	058fb7763dd1a       7dd6aaa1717ab       About a minute ago   Exited              kube-scheduler            0                   58efc81b1d896       kube-scheduler-newest-cni-791900            kube-system
	745e4a2e15e44       c80c8dbafe7dd       About a minute ago   Exited              kube-controller-manager   0                   94e11fcbd58f7       kube-controller-manager-newest-cni-791900   kube-system
	21eeea74b8607       c3994bc696102       About a minute ago   Exited              kube-apiserver            0                   b79bfc3872fe0       kube-apiserver-newest-cni-791900            kube-system
	238eee50046ad       5f1f5298c888d       About a minute ago   Exited              etcd                      0                   648def2e45461       etcd-newest-cni-791900                      kube-system
	
	
	==> coredns [17b84a4a42c5] <==
	maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e7e8a6c4578bf29b9f453cb54ade3fb14671793481527b7435e35119b25e84eb3a79242b1f470199f8605ace441674db8f1b6715b77448c20dde63e2dc5d2169
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51309 - 54663 "HINFO IN 7604338953619229612.4548397862659080650. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.053408935s
	
	
	==> coredns [2c5823499033] <==
	maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	
	
	==> coredns [bdb5835021a1] <==
	maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[ +19.675702] tmpfs: Unknown parameter 'noswap'
	[ +19.262323] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:02] tmpfs: Unknown parameter 'noswap'
	[  +7.728416] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:03] tmpfs: Unknown parameter 'noswap'
	[  +9.732869] tmpfs: Unknown parameter 'noswap'
	[ +27.127530] tmpfs: Unknown parameter 'noswap'
	[  +6.066550] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:04] tmpfs: Unknown parameter 'noswap'
	[  +8.924087] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:06] tmpfs: Unknown parameter 'noswap'
	[  +8.057177] tmpfs: Unknown parameter 'noswap'
	[  +0.556972] tmpfs: Unknown parameter 'noswap'
	[  +8.969392] tmpfs: Unknown parameter 'noswap'
	[  +0.048063] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:07] tmpfs: Unknown parameter 'noswap'
	[ +34.094078] tmpfs: Unknown parameter 'noswap'
	[  +0.670795] tmpfs: Unknown parameter 'noswap'
	[  +8.459494] tmpfs: Unknown parameter 'noswap'
	[  +2.697435] tmpfs: Unknown parameter 'noswap'
	[Oct27 20:08] tmpfs: Unknown parameter 'noswap'
	[  +1.487418] tmpfs: Unknown parameter 'noswap'
	[ +41.022105] tmpfs: Unknown parameter 'noswap'
	[  +1.253626] tmpfs: Unknown parameter 'noswap'
	[  +1.096222] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [238eee50046a] <==
	{"level":"warn","ts":"2025-10-27T20:08:21.982412Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:08:21.148066Z","time spent":"834.334492ms","remote":"127.0.0.1:34948","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-10-27T20:08:24.393596Z","caller":"traceutil/trace.go:172","msg":"trace[1880246924] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"103.665874ms","start":"2025-10-27T20:08:24.289909Z","end":"2025-10-27T20:08:24.393575Z","steps":["trace[1880246924] 'process raft request'  (duration: 94.983988ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:08:24.518192Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.215957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-10-27T20:08:24.518575Z","caller":"traceutil/trace.go:172","msg":"trace[1259323712] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:440; }","duration":"112.439178ms","start":"2025-10-27T20:08:24.405942Z","end":"2025-10-27T20:08:24.518381Z","steps":["trace[1259323712] 'agreement among raft nodes before linearized reading'  (duration: 89.275381ms)","trace[1259323712] 'range keys from in-memory index tree'  (duration: 22.861469ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T20:08:24.518880Z","caller":"traceutil/trace.go:172","msg":"trace[1542948215] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"112.868217ms","start":"2025-10-27T20:08:24.405999Z","end":"2025-10-27T20:08:24.518867Z","steps":["trace[1542948215] 'process raft request'  (duration: 112.627895ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T20:08:24.518872Z","caller":"traceutil/trace.go:172","msg":"trace[107084076] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"114.531167ms","start":"2025-10-27T20:08:24.404314Z","end":"2025-10-27T20:08:24.518845Z","steps":["trace[107084076] 'process raft request'  (duration: 90.952832ms)","trace[107084076] 'compare'  (duration: 22.764061ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T20:08:24.518818Z","caller":"traceutil/trace.go:172","msg":"trace[2003993897] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"114.34405ms","start":"2025-10-27T20:08:24.404446Z","end":"2025-10-27T20:08:24.518790Z","steps":["trace[2003993897] 'process raft request'  (duration: 114.101628ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T20:08:25.900705Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T20:08:25.900781Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"newest-cni-791900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-27T20:08:25.900962Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T20:08:32.902889Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-27T20:08:32.904380Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T20:08:32.904407Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-27T20:08:32.904510Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-10-27T20:08:32.904545Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T20:08:32.904559Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-27T20:08:32.904416Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T20:08:32.904525Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-27T20:08:32.904585Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-27T20:08:32.904582Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T20:08:32.904574Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-27T20:08:32.916358Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-27T20:08:32.916618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T20:08:32.916721Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-27T20:08:32.916818Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"newest-cni-791900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [803d08509c21] <==
	{"level":"info","ts":"2025-10-27T20:09:02.607110Z","caller":"traceutil/trace.go:172","msg":"trace[1501071185] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"120.318779ms","start":"2025-10-27T20:09:02.486725Z","end":"2025-10-27T20:09:02.607043Z","steps":["trace[1501071185] 'process raft request'  (duration: 96.21648ms)","trace[1501071185] 'compare'  (duration: 23.440039ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:09:02.818101Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.778865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:node-proxier\" limit:1 ","response":"range_response_count:1 size:699"}
	{"level":"info","ts":"2025-10-27T20:09:02.818181Z","caller":"traceutil/trace.go:172","msg":"trace[1528217008] range","detail":"{range_begin:/registry/clusterrolebindings/system:node-proxier; range_end:; response_count:1; response_revision:557; }","duration":"115.873774ms","start":"2025-10-27T20:09:02.702285Z","end":"2025-10-27T20:09:02.818159Z","steps":["trace[1528217008] 'range keys from in-memory index tree'  (duration: 109.965235ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:09:02.818436Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.147451ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356249621792625 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-vbg5n.187271fe177b09f0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-vbg5n.187271fe177b09f0\" value_size:648 lease:6414984212767016582 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T20:09:02.818593Z","caller":"traceutil/trace.go:172","msg":"trace[1027241301] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"122.308861ms","start":"2025-10-27T20:09:02.696269Z","end":"2025-10-27T20:09:02.818578Z","steps":["trace[1027241301] 'process raft request'  (duration: 12.012296ms)","trace[1027241301] 'compare'  (duration: 110.010639ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:09:06.058213Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638356249621792782,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-27T20:09:06.501679Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:09:05.524188Z","time spent":"977.408715ms","remote":"127.0.0.1:60768","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2025/10/27 20:09:06 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-10-27T20:09:06.502223Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"827.703142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-66bc5c9577-9dn89.187271fed8625b51\" limit:1 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2025-10-27T20:09:06.502423Z","caller":"traceutil/trace.go:172","msg":"trace[84124662] range","detail":"{range_begin:/registry/events/kube-system/coredns-66bc5c9577-9dn89.187271fed8625b51; range_end:; }","duration":"828.579521ms","start":"2025-10-27T20:09:05.673755Z","end":"2025-10-27T20:09:06.502334Z","steps":["trace[84124662] 'agreement among raft nodes before linearized reading'  (duration: 827.661938ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:09:06.502723Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:09:05.673737Z","time spent":"828.878349ms","remote":"127.0.0.1:60516","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":0,"response size":0,"request content":"key:\"/registry/events/kube-system/coredns-66bc5c9577-9dn89.187271fed8625b51\" limit:1 "}
	2025/10/27 20:09:06 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-10-27T20:09:06.558923Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638356249621792782,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-27T20:09:07.059544Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638356249621792782,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-27T20:09:07.560268Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638356249621792782,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-27T20:09:08.060782Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638356249621792782,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-27T20:09:08.144531Z","caller":"wal/wal.go:845","msg":"slow fdatasync","took":"2.620128649s","expected-duration":"1s"}
	{"level":"info","ts":"2025-10-27T20:09:08.144995Z","caller":"traceutil/trace.go:172","msg":"trace[115043272] linearizableReadLoop","detail":"{readStateIndex:613; appliedIndex:613; }","duration":"2.58781872s","start":"2025-10-27T20:09:05.557152Z","end":"2025-10-27T20:09:08.144971Z","steps":["trace[115043272] 'read index received'  (duration: 2.587810919s)","trace[115043272] 'applied index is now lower than readState.Index'  (duration: 7µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:09:08.145300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"2.588129847s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-10-27T20:09:08.145387Z","caller":"traceutil/trace.go:172","msg":"trace[2052982247] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:584; }","duration":"2.588179652s","start":"2025-10-27T20:09:05.557148Z","end":"2025-10-27T20:09:08.145327Z","steps":["trace[2052982247] 'agreement among raft nodes before linearized reading'  (duration: 2.587932229s)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:09:08.145480Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:09:05.557130Z","time spent":"2.588337967s","remote":"127.0.0.1:60790","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":232,"request content":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 "}
	{"level":"warn","ts":"2025-10-27T20:09:08.145782Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"1.86392939s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T20:09:08.145887Z","caller":"traceutil/trace.go:172","msg":"trace[701626543] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:585; }","duration":"1.8640399s","start":"2025-10-27T20:09:06.281838Z","end":"2025-10-27T20:09:08.145878Z","steps":["trace[701626543] 'agreement among raft nodes before linearized reading'  (duration: 1.863905988s)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:09:08.145915Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:09:06.281816Z","time spent":"1.864091805s","remote":"127.0.0.1:60350","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-10-27T20:09:11.930098Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"395.343443ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356249621792786 > lease_revoke:<id:59069a2748ddf16f>","response":"size:28"}
	
	
	==> kernel <==
	 20:09:44 up  1:23,  0 user,  load average: 7.53, 6.21, 4.35
	Linux newest-cni-791900 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [21eeea74b860] <==
	W1027 20:08:35.211618       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.250998       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.254774       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.280674       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.292309       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.315605       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.330265       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.335953       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.375635       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.421918       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.461155       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.472596       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.475127       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.515083       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.569220       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.571812       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.589212       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.622396       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.640716       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.640800       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.658015       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.791153       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.846887       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.998263       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 20:08:35.998307       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7176c4cb6419] <==
	 > logger="UnhandledError"
	I1027 20:08:59.787719       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1027 20:08:59.787759       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1027 20:08:59.789940       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1027 20:09:00.000641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:09:00.001068       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:09:00.492177       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:09:00.886139       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:09:01.887889       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:09:02.083572       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:09:02.608422       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.34.77"}
	I1027 20:09:02.700045       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.28.43"}
	I1027 20:09:03.794701       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 20:09:06.500732       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1027 20:09:06.500714       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-10-27T20:09:06.500186Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0011b4f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	{"level":"warn","ts":"2025-10-27T20:09:06.500223Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f4e780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1027 20:09:06.502497       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1027 20:09:06.502531       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1027 20:09:06.502546       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.674242ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1027 20:09:06.502591       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.467724ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1027 20:09:06.502599       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1027 20:09:06.503783       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1027 20:09:06.504095       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.998263ms" method="PATCH" path="/api/v1/namespaces/kube-system/events/coredns-66bc5c9577-9dn89.187271fed8625b51" result=null
	E1027 20:09:06.504333       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.231484ms" method="PATCH" path="/api/v1/namespaces/kube-system/pods/kube-controller-manager-newest-cni-791900/status" result=null
	
	
	==> kube-controller-manager [745e4a2e15e4] <==
	I1027 20:08:10.588531       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 20:08:10.588709       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 20:08:10.588545       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 20:08:10.588628       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 20:08:10.588519       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 20:08:10.588562       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 20:08:10.589023       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 20:08:10.589250       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-791900"
	I1027 20:08:10.589383       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 20:08:10.589786       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 20:08:10.590007       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 20:08:10.590106       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:08:10.590044       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:08:10.590056       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:08:10.590025       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 20:08:10.590073       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 20:08:10.590089       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 20:08:10.590065       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 20:08:10.603001       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:08:10.629663       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-791900" podCIDRs=["10.42.0.0/24"]
	I1027 20:08:10.661307       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:08:10.661386       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 20:08:10.661398       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 20:08:15.589799       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1027 20:08:24.302699       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-746fcd58dc\" failed with pods \"metrics-server-746fcd58dc-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c1e7a89ac54a] <==
	I1027 20:09:05.110866       1 controllermanager.go:781] "Started controller" controller="node-lifecycle-controller"
	I1027 20:09:05.110994       1 node_lifecycle_controller.go:453] "Sending events to api server" logger="node-lifecycle-controller"
	I1027 20:09:05.111015       1 node_lifecycle_controller.go:464] "Starting node controller" logger="node-lifecycle-controller"
	I1027 20:09:05.111090       1 shared_informer.go:349] "Waiting for caches to sync" controller="taint"
	I1027 20:09:05.159803       1 controllermanager.go:781] "Started controller" controller="clusterrole-aggregation-controller"
	I1027 20:09:05.159932       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1027 20:09:05.159951       1 shared_informer.go:349] "Waiting for caches to sync" controller="ClusterRoleAggregator"
	I1027 20:09:05.210294       1 controllermanager.go:781] "Started controller" controller="persistentvolume-protection-controller"
	I1027 20:09:05.210388       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I1027 20:09:05.210410       1 controllermanager.go:744] "Warning: controller is disabled" controller="selinux-warning-controller"
	I1027 20:09:05.210428       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1027 20:09:05.210450       1 shared_informer.go:349] "Waiting for caches to sync" controller="PV protection"
	I1027 20:09:05.263139       1 controllermanager.go:781] "Started controller" controller="pod-garbage-collector-controller"
	I1027 20:09:05.263307       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1027 20:09:05.263377       1 shared_informer.go:349] "Waiting for caches to sync" controller="GC"
	E1027 20:09:05.418120       1 namespaced_resources_deleter.go:164] "Unhandled Error" err="unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 20:09:05.418345       1 controllermanager.go:781] "Started controller" controller="namespace-controller"
	I1027 20:09:05.418399       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1027 20:09:05.418482       1 shared_informer.go:349] "Waiting for caches to sync" controller="namespace"
	I1027 20:09:05.459836       1 controllermanager.go:781] "Started controller" controller="deployment-controller"
	I1027 20:09:05.460013       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1027 20:09:05.460026       1 shared_informer.go:349] "Waiting for caches to sync" controller="deployment"
	I1027 20:09:05.511079       1 controllermanager.go:781] "Started controller" controller="replicaset-controller"
	I1027 20:09:05.511337       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1027 20:09:05.511357       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	
	
	==> kube-proxy [30075e9acc07] <==
	I1027 20:08:24.086393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:08:24.187713       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:08:24.187970       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 20:08:24.188126       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:08:24.390441       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:08:24.390542       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:08:24.406408       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E1027 20:08:24.418877       1 proxier.go:270] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E1027 20:08:24.429626       1 proxier.go:270] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I1027 20:08:24.429687       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:08:24.429696       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1027 20:08:24.485575       1 metrics.go:379] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E1027 20:08:24.502230       1 metrics.go:379] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I1027 20:08:24.512608       1 config.go:309] "Starting node config controller"
	I1027 20:08:24.512806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:08:24.512818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:08:24.513031       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:08:24.513045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:08:24.513089       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:08:24.513096       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:08:24.513088       1 config.go:200] "Starting service config controller"
	I1027 20:08:24.513127       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:08:24.614172       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:08:24.614306       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:08:24.614549       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dcc03d944fd3] <==
	I1027 20:09:03.284528       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:09:03.384822       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:09:03.384946       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 20:09:03.385201       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:09:03.434260       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:09:03.434371       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:09:03.490863       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E1027 20:09:03.505663       1 proxier.go:270] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E1027 20:09:03.520474       1 proxier.go:270] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I1027 20:09:03.520576       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:09:03.520605       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1027 20:09:03.535061       1 metrics.go:379] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E1027 20:09:03.551775       1 metrics.go:379] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I1027 20:09:03.578748       1 config.go:200] "Starting service config controller"
	I1027 20:09:03.578761       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:09:03.578786       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:09:03.578790       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:09:03.578804       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:09:03.578809       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:09:03.579682       1 config.go:309] "Starting node config controller"
	I1027 20:09:03.579688       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:09:03.579694       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:09:03.678953       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 20:09:03.679050       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:09:03.682263       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [058fb7763dd1] <==
	E1027 20:08:03.009145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 20:08:03.821947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 20:08:03.873874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 20:08:03.879391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 20:08:03.894641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 20:08:03.928144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 20:08:03.966235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 20:08:03.982432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 20:08:04.011767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 20:08:04.035651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 20:08:04.051959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 20:08:04.123647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 20:08:04.247172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 20:08:04.282527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 20:08:04.325016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 20:08:04.388986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 20:08:04.470146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 20:08:04.523094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1027 20:08:07.197307       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:08:25.896933       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:08:25.897074       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 20:08:25.897086       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 20:08:25.897124       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 20:08:25.897185       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 20:08:25.897209       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [12e30c3d2252] <==
	I1027 20:08:55.811180       1 serving.go:386] Generated self-signed cert in-memory
	W1027 20:08:58.784904       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 20:08:58.784957       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 20:08:58.784973       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 20:08:58.784985       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 20:08:58.992756       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:08:58.992813       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:08:58.998446       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:08:58.998469       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:08:58.998517       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:08:58.998586       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:08:59.098882       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:08:59 newest-cni-791900 kubelet[1486]: I1027 20:08:59.800505    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee68ca38-17ef-44ad-b834-acbd164ef7d6-lib-modules\") pod \"kube-proxy-vbg5n\" (UID: \"ee68ca38-17ef-44ad-b834-acbd164ef7d6\") " pod="kube-system/kube-proxy-vbg5n"
	Oct 27 20:08:59 newest-cni-791900 kubelet[1486]: I1027 20:08:59.800618    1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac6bfc17-79ef-4e20-a54a-3b65aabcba9e-tmp\") pod \"storage-provisioner\" (UID: \"ac6bfc17-79ef-4e20-a54a-3b65aabcba9e\") " pod="kube-system/storage-provisioner"
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.183210    1486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b696566a-ca9c-4790-aaca-5da2c8011a54-config-volume\") pod \"b696566a-ca9c-4790-aaca-5da2c8011a54\" (UID: \"b696566a-ca9c-4790-aaca-5da2c8011a54\") "
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.183284    1486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll5r8\" (UniqueName: \"kubernetes.io/projected/b696566a-ca9c-4790-aaca-5da2c8011a54-kube-api-access-ll5r8\") pod \"b696566a-ca9c-4790-aaca-5da2c8011a54\" (UID: \"b696566a-ca9c-4790-aaca-5da2c8011a54\") "
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.190477    1486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b696566a-ca9c-4790-aaca-5da2c8011a54-config-volume" (OuterVolumeSpecName: "config-volume") pod "b696566a-ca9c-4790-aaca-5da2c8011a54" (UID: "b696566a-ca9c-4790-aaca-5da2c8011a54"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.199820    1486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b696566a-ca9c-4790-aaca-5da2c8011a54-kube-api-access-ll5r8" (OuterVolumeSpecName: "kube-api-access-ll5r8") pod "b696566a-ca9c-4790-aaca-5da2c8011a54" (UID: "b696566a-ca9c-4790-aaca-5da2c8011a54"). InnerVolumeSpecName "kube-api-access-ll5r8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.284452    1486 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b696566a-ca9c-4790-aaca-5da2c8011a54-config-volume\") on node \"newest-cni-791900\" DevicePath \"\""
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: I1027 20:09:00.284564    1486 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ll5r8\" (UniqueName: \"kubernetes.io/projected/b696566a-ca9c-4790-aaca-5da2c8011a54-kube-api-access-ll5r8\") on node \"newest-cni-791900\" DevicePath \"\""
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: E1027 20:09:00.996419    1486 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Oct 27 20:09:00 newest-cni-791900 kubelet[1486]: E1027 20:09:00.996578    1486 helpers.go:860] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: I1027 20:09:02.092672    1486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a4c5dba83f407bfba9a3ac4dab1311636c96509146a1c14b3b20c8ac83a4bf8"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: I1027 20:09:02.389339    1486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cfa46c3ed14792a85b33df1894979416b5be80101e10095e1386c6733d0876d"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: I1027 20:09:02.404335    1486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b82b27fd319c70c226be85d6a4939cc1274df82e36a88762b4d8bc4526ffccb8"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: I1027 20:09:02.416344    1486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25926e275c2c6333ca79efe406c12138726e6c6b8a80cf3482ad85f830fc0c96"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: I1027 20:09:02.722626    1486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b696566a-ca9c-4790-aaca-5da2c8011a54" path="/var/lib/kubelet/pods/b696566a-ca9c-4790-aaca-5da2c8011a54/volumes"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: E1027 20:09:02.819192    1486 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: E1027 20:09:02.819409    1486 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: E1027 20:09:02.819718    1486 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-2jmdk_kube-system(30732149-b3b4-43dc-80bc-6fd70db8a5bf): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" logger="UnhandledError"
	Oct 27 20:09:02 newest-cni-791900 kubelet[1486]: E1027 20:09:02.819859    1486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-2jmdk" podUID="30732149-b3b4-43dc-80bc-6fd70db8a5bf"
	Oct 27 20:09:03 newest-cni-791900 kubelet[1486]: E1027 20:09:03.518614    1486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-2jmdk" podUID="30732149-b3b4-43dc-80bc-6fd70db8a5bf"
	Oct 27 20:09:04 newest-cni-791900 kubelet[1486]: E1027 20:09:04.665649    1486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-2jmdk" podUID="30732149-b3b4-43dc-80bc-6fd70db8a5bf"
	Oct 27 20:09:06 newest-cni-791900 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 20:09:06 newest-cni-791900 kubelet[1486]: I1027 20:09:06.492743    1486 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 27 20:09:06 newest-cni-791900 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 20:09:06 newest-cni-791900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> storage-provisioner [5f27e5738a2e] <==
	I1027 20:09:03.114013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [a428f291ad15] <==
	I1027 20:08:24.517997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-791900 -n newest-cni-791900
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-791900 -n newest-cni-791900: exit status 2 (755.8349ms)

-- stdout --
	Paused

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-791900" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (40.33s)


Test pass (315/344)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.45
4 TestDownloadOnly/v1.28.0/preload-exists 0.22
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.75
9 TestDownloadOnly/v1.28.0/DeleteAll 1.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.49
12 TestDownloadOnly/v1.34.1/json-events 5.43
13 TestDownloadOnly/v1.34.1/preload-exists 0
16 TestDownloadOnly/v1.34.1/kubectl 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.65
18 TestDownloadOnly/v1.34.1/DeleteAll 0.75
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.8
20 TestDownloadOnlyKic 2.31
21 TestBinaryMirror 2.18
22 TestOffline 137.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.22
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.2
27 TestAddons/Setup 405.37
29 TestAddons/serial/Volcano 51.06
31 TestAddons/serial/GCPAuth/Namespaces 0.25
32 TestAddons/serial/GCPAuth/FakeCredentials 11.16
36 TestAddons/parallel/RegistryCreds 1.78
38 TestAddons/parallel/InspektorGadget 6.49
39 TestAddons/parallel/MetricsServer 8.04
41 TestAddons/parallel/CSI 49.44
42 TestAddons/parallel/Headlamp 36.25
43 TestAddons/parallel/CloudSpanner 7.05
44 TestAddons/parallel/LocalPath 58.6
45 TestAddons/parallel/NvidiaDevicePlugin 5.96
46 TestAddons/parallel/Yakd 13.81
47 TestAddons/parallel/AmdGpuDevicePlugin 7.68
48 TestAddons/StoppedEnableDisable 13.12
49 TestCertOptions 69.04
50 TestCertExpiration 286.54
51 TestDockerFlags 62.14
52 TestForceSystemdFlag 106.6
53 TestForceSystemdEnv 59.01
59 TestErrorSpam/start 2.66
60 TestErrorSpam/status 2.17
61 TestErrorSpam/pause 2.62
62 TestErrorSpam/unpause 2.65
63 TestErrorSpam/stop 18.71
66 TestFunctional/serial/CopySyncFile 0.04
67 TestFunctional/serial/StartWithProxy 83.09
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 59.02
70 TestFunctional/serial/KubeContext 0.09
71 TestFunctional/serial/KubectlGetPods 0.18
74 TestFunctional/serial/CacheCmd/cache/add_remote 10.18
75 TestFunctional/serial/CacheCmd/cache/add_local 4.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.2
77 TestFunctional/serial/CacheCmd/cache/list 0.2
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.65
79 TestFunctional/serial/CacheCmd/cache/cache_reload 4.56
80 TestFunctional/serial/CacheCmd/cache/delete 0.38
81 TestFunctional/serial/MinikubeKubectlCmd 0.48
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.33
83 TestFunctional/serial/ExtraConfig 58.98
84 TestFunctional/serial/ComponentHealth 0.14
85 TestFunctional/serial/LogsCmd 1.89
86 TestFunctional/serial/LogsFileCmd 1.94
87 TestFunctional/serial/InvalidService 5.37
89 TestFunctional/parallel/ConfigCmd 1.25
91 TestFunctional/parallel/DryRun 1.48
92 TestFunctional/parallel/InternationalLanguage 0.66
93 TestFunctional/parallel/StatusCmd 2.06
98 TestFunctional/parallel/AddonsCmd 1.05
99 TestFunctional/parallel/PersistentVolumeClaim 64.57
101 TestFunctional/parallel/SSHCmd 1.28
102 TestFunctional/parallel/CpCmd 3.61
103 TestFunctional/parallel/MySQL 70.42
104 TestFunctional/parallel/FileSync 0.63
105 TestFunctional/parallel/CertSync 3.79
109 TestFunctional/parallel/NodeLabels 0.14
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
113 TestFunctional/parallel/License 1.42
114 TestFunctional/parallel/Version/short 0.18
115 TestFunctional/parallel/Version/components 0.94
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.46
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.49
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.48
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.47
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.93
121 TestFunctional/parallel/ImageCommands/Setup 1.83
122 TestFunctional/parallel/DockerEnv/powershell 11.91
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.33
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.33
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.32
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.65
127 TestFunctional/parallel/ProfileCmd/profile_not_create 1.07
128 TestFunctional/parallel/ProfileCmd/profile_list 1.21
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.54
130 TestFunctional/parallel/ProfileCmd/profile_json_output 1.37
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.12
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 45.62
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.08
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.75
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.96
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.36
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.26
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
147 TestFunctional/parallel/ServiceCmd/DeployApp 13.48
148 TestFunctional/parallel/ServiceCmd/List 1.28
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.24
150 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
151 TestFunctional/parallel/ServiceCmd/Format 15.01
152 TestFunctional/parallel/ServiceCmd/URL 15.01
153 TestFunctional/delete_echo-server_images 0.16
154 TestFunctional/delete_my-image_image 0.07
155 TestFunctional/delete_minikube_cached_images 0.07
160 TestMultiControlPlane/serial/StartCluster 228.26
161 TestMultiControlPlane/serial/DeployApp 9.12
162 TestMultiControlPlane/serial/PingHostFromPods 2.55
163 TestMultiControlPlane/serial/AddWorkerNode 58.07
164 TestMultiControlPlane/serial/NodeLabels 0.14
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 2.1
166 TestMultiControlPlane/serial/CopyFile 35.05
167 TestMultiControlPlane/serial/StopSecondaryNode 13.5
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.66
169 TestMultiControlPlane/serial/RestartSecondaryNode 55
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.22
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 197.19
172 TestMultiControlPlane/serial/DeleteSecondaryNode 14.82
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.6
174 TestMultiControlPlane/serial/StopCluster 37.18
175 TestMultiControlPlane/serial/RestartCluster 122.28
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.58
177 TestMultiControlPlane/serial/AddSecondaryNode 88.72
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2.09
181 TestImageBuild/serial/Setup 54.64
182 TestImageBuild/serial/NormalBuild 3.6
183 TestImageBuild/serial/BuildWithBuildArg 2.41
184 TestImageBuild/serial/BuildWithDockerIgnore 1.21
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.41
190 TestJSONOutput/start/Command 90.27
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 1.17
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.92
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 12.16
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.7
215 TestKicCustomNetwork/create_custom_network 58.43
216 TestKicCustomNetwork/use_default_bridge_network 57.03
217 TestKicExistingNetwork 57.89
218 TestKicCustomSubnet 59.11
219 TestKicStaticIP 56.43
220 TestMainNoArgs 0.16
221 TestMinikubeProfile 110.98
224 TestMountStart/serial/StartWithMountFirst 14.33
225 TestMountStart/serial/VerifyMountFirst 0.59
226 TestMountStart/serial/StartWithMountSecond 14.03
227 TestMountStart/serial/VerifyMountSecond 0.58
228 TestMountStart/serial/DeleteFirst 2.51
229 TestMountStart/serial/VerifyMountPostDelete 0.58
230 TestMountStart/serial/Stop 1.88
231 TestMountStart/serial/RestartStopped 10.99
232 TestMountStart/serial/VerifyMountPostStop 0.58
235 TestMultiNode/serial/FreshStart2Nodes 137.16
236 TestMultiNode/serial/DeployApp2Nodes 7.02
237 TestMultiNode/serial/PingHostFrom2Pods 1.74
238 TestMultiNode/serial/AddNode 56.99
239 TestMultiNode/serial/MultiNodeLabels 0.13
240 TestMultiNode/serial/ProfileList 1.47
241 TestMultiNode/serial/CopyFile 20.12
242 TestMultiNode/serial/StopNode 3.98
243 TestMultiNode/serial/StartAfterStop 13.53
244 TestMultiNode/serial/RestartKeepsNodes 89.73
245 TestMultiNode/serial/DeleteNode 8.62
246 TestMultiNode/serial/StopMultiNode 23.94
247 TestMultiNode/serial/RestartMultiNode 61.58
248 TestMultiNode/serial/ValidateNameConflict 56.07
252 TestPreload 167.81
253 TestScheduledStopWindows 115.29
257 TestInsufficientStorage 32.17
258 TestRunningBinaryUpgrade 115.4
260 TestKubernetesUpgrade 439.9
261 TestMissingContainerUpgrade 137.28
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.3
264 TestStoppedBinaryUpgrade/Setup 0.81
265 TestNoKubernetes/serial/StartWithK8s 101.9
266 TestStoppedBinaryUpgrade/Upgrade 172.83
267 TestNoKubernetes/serial/StartWithStopK8s 26.7
268 TestNoKubernetes/serial/Start 19.38
277 TestPause/serial/Start 82.49
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.55
279 TestNoKubernetes/serial/ProfileList 3.4
280 TestNoKubernetes/serial/Stop 2.06
281 TestNoKubernetes/serial/StartNoArgs 10.73
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.59
283 TestStoppedBinaryUpgrade/MinikubeLogs 2.93
295 TestPause/serial/SecondStartNoReconfiguration 89.51
296 TestPause/serial/Pause 1.36
297 TestPause/serial/VerifyStatus 0.69
298 TestPause/serial/Unpause 1.52
299 TestPause/serial/PauseAgain 1.52
300 TestPause/serial/DeletePaused 12.77
301 TestPause/serial/VerifyDeletedResources 1.92
303 TestStartStop/group/old-k8s-version/serial/FirstStart 76.96
305 TestStartStop/group/no-preload/serial/FirstStart 116.25
306 TestStartStop/group/old-k8s-version/serial/DeployApp 9.76
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.62
308 TestStartStop/group/old-k8s-version/serial/Stop 14.05
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.58
310 TestStartStop/group/old-k8s-version/serial/SecondStart 33.38
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 23.01
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.37
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.49
314 TestStartStop/group/old-k8s-version/serial/Pause 5.62
315 TestStartStop/group/no-preload/serial/DeployApp 11.73
317 TestStartStop/group/embed-certs/serial/FirstStart 96.59
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.51
320 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.94
321 TestStartStop/group/no-preload/serial/Stop 20.03
322 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.62
323 TestStartStop/group/no-preload/serial/SecondStart 60.29
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.24
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.66
327 TestStartStop/group/embed-certs/serial/DeployApp 10.67
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.49
329 TestStartStop/group/no-preload/serial/Pause 5.26
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.57
331 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.42
332 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.24
333 TestStartStop/group/embed-certs/serial/Stop 16.21
335 TestStartStop/group/newest-cni/serial/FirstStart 62.7
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.6
337 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.65
338 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.17
339 TestStartStop/group/embed-certs/serial/SecondStart 59.91
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.8
342 TestStartStop/group/newest-cni/serial/Stop 12.19
343 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.02
344 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.54
346 TestStartStop/group/newest-cni/serial/SecondStart 27.52
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.31
348 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.33
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.49
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.57
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.52
352 TestStartStop/group/embed-certs/serial/Pause 5.97
353 TestNetworkPlugins/group/auto/Start 98.4
354 TestNetworkPlugins/group/kindnet/Start 114.41
355 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.49
359 TestNetworkPlugins/group/calico/Start 118.23
360 TestNetworkPlugins/group/custom-flannel/Start 95.39
361 TestNetworkPlugins/group/auto/KubeletFlags 0.64
362 TestNetworkPlugins/group/auto/NetCatPod 17.45
363 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
364 TestNetworkPlugins/group/auto/DNS 0.25
365 TestNetworkPlugins/group/auto/Localhost 0.22
366 TestNetworkPlugins/group/auto/HairPin 0.21
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.72
368 TestNetworkPlugins/group/kindnet/NetCatPod 20.1
369 TestNetworkPlugins/group/kindnet/DNS 0.24
370 TestNetworkPlugins/group/kindnet/Localhost 0.23
371 TestNetworkPlugins/group/kindnet/HairPin 0.25
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.59
373 TestNetworkPlugins/group/calico/ControllerPod 6.01
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 16.31
375 TestNetworkPlugins/group/calico/KubeletFlags 0.71
376 TestNetworkPlugins/group/false/Start 109.03
377 TestNetworkPlugins/group/calico/NetCatPod 27.62
378 TestNetworkPlugins/group/custom-flannel/DNS 0.27
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
381 TestNetworkPlugins/group/enable-default-cni/Start 100.9
382 TestNetworkPlugins/group/calico/DNS 0.26
383 TestNetworkPlugins/group/calico/Localhost 0.22
384 TestNetworkPlugins/group/calico/HairPin 0.21
385 TestNetworkPlugins/group/flannel/Start 82.73
386 TestNetworkPlugins/group/bridge/Start 80.91
387 TestNetworkPlugins/group/false/KubeletFlags 0.59
388 TestNetworkPlugins/group/false/NetCatPod 14.45
389 TestNetworkPlugins/group/false/DNS 0.28
390 TestNetworkPlugins/group/false/Localhost 0.25
391 TestNetworkPlugins/group/false/HairPin 0.24
392 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.56
393 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.51
394 TestNetworkPlugins/group/flannel/ControllerPod 6.01
395 TestNetworkPlugins/group/flannel/KubeletFlags 0.61
396 TestNetworkPlugins/group/flannel/NetCatPod 14.49
397 TestNetworkPlugins/group/enable-default-cni/DNS 0.32
398 TestNetworkPlugins/group/enable-default-cni/Localhost 0.26
399 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
400 TestNetworkPlugins/group/bridge/KubeletFlags 0.59
401 TestNetworkPlugins/group/bridge/NetCatPod 15.57
402 TestNetworkPlugins/group/flannel/DNS 0.25
403 TestNetworkPlugins/group/flannel/Localhost 0.28
404 TestNetworkPlugins/group/flannel/HairPin 0.3
405 TestNetworkPlugins/group/kubenet/Start 103.64
406 TestNetworkPlugins/group/bridge/DNS 0.27
407 TestNetworkPlugins/group/bridge/Localhost 0.25
408 TestNetworkPlugins/group/bridge/HairPin 0.24
409 TestNetworkPlugins/group/kubenet/KubeletFlags 0.57
410 TestNetworkPlugins/group/kubenet/NetCatPod 14.48
411 TestNetworkPlugins/group/kubenet/DNS 0.23
412 TestNetworkPlugins/group/kubenet/Localhost 0.2
413 TestNetworkPlugins/group/kubenet/HairPin 0.2
TestDownloadOnly/v1.28.0/json-events (6.45s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-427100 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-427100 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker: (6.4528399s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.45s)

TestDownloadOnly/v1.28.0/preload-exists (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1027 18:56:53.971949   10564 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1027 18:56:54.085315   10564 preload.go:198] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.22s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.75s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-427100
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-427100: exit status 85 (751.9255ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-427100 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-427100 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:47
	Running on machine: minikube4
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:47.592261   11960 out.go:360] Setting OutFile to fd 668 ...
	I1027 18:56:47.633730   11960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:47.633730   11960 out.go:374] Setting ErrFile to fd 672...
	I1027 18:56:47.633730   11960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1027 18:56:47.645312   11960 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1027 18:56:47.651365   11960 out.go:368] Setting JSON to true
	I1027 18:56:47.654394   11960 start.go:131] hostinfo: {"hostname":"minikube4","uptime":657,"bootTime":1761590749,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1027 18:56:47.654394   11960 start.go:139] gopshost.Virtualization returned error: not implemented yet
	I1027 18:56:47.659672   11960 out.go:99] [download-only-427100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	W1027 18:56:47.659672   11960 preload.go:349] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1027 18:56:47.659672   11960 notify.go:220] Checking for updates...
	I1027 18:56:47.661673   11960 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1027 18:56:47.663680   11960 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1027 18:56:47.665695   11960 out.go:171] MINIKUBE_LOCATION=21801
	I1027 18:56:47.668704   11960 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1027 18:56:47.671792   11960 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 18:56:47.672921   11960 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:47.882596   11960 docker.go:123] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1027 18:56:47.889263   11960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:48.614687   11960 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-10-27 18:56:48.587298126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 18:56:48.618767   11960 out.go:99] Using the docker driver based on user configuration
	I1027 18:56:48.618862   11960 start.go:305] selected driver: docker
	I1027 18:56:48.618913   11960 start.go:925] validating driver "docker" against <nil>
	I1027 18:56:48.630321   11960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:48.879994   11960 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-10-27 18:56:48.862772578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 18:56:48.879994   11960 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:48.919753   11960 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1027 18:56:48.920345   11960 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 18:56:48.925069   11960 out.go:171] Using Docker Desktop driver with root privileges
	I1027 18:56:48.927801   11960 cni.go:84] Creating CNI manager for ""
	I1027 18:56:48.927801   11960 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1027 18:56:48.927801   11960 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:48.928247   11960 start.go:349] cluster config:
	{Name:download-only-427100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-427100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:48.930657   11960 out.go:99] Starting "download-only-427100" primary control-plane node in "download-only-427100" cluster
	I1027 18:56:48.930657   11960 cache.go:123] Beginning downloading kic base image for docker with docker
	I1027 18:56:48.932097   11960 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 18:56:48.932097   11960 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1027 18:56:48.932097   11960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 18:56:48.977724   11960 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1027 18:56:48.977816   11960 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:48.978102   11960 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1027 18:56:48.980981   11960 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1027 18:56:48.981030   11960 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1027 18:56:48.993693   11960 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:48.993693   11960 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1760939008-21773@sha256_d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8.tar
	I1027 18:56:48.993693   11960 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1760939008-21773@sha256_d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8.tar
	I1027 18:56:48.993693   11960 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 18:56:48.994968   11960 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:49.059234   11960 preload.go:290] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1027 18:56:49.060203   11960 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1027 18:56:52.302139   11960 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1027 18:56:52.302679   11960 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-427100\config.json ...
	I1027 18:56:52.303145   11960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-427100\config.json: {Name:mk547c3ee05168d91a7e3cec8da93e8cc805c745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:52.303388   11960 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1027 18:56:52.304078   11960 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.28.0/kubectl.exe
	
	
	* The control-plane node download-only-427100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-427100"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.75s)

TestDownloadOnly/v1.28.0/DeleteAll (1.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1591319s)
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (1.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.49s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-427100
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.49s)

TestDownloadOnly/v1.34.1/json-events (5.43s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-021800 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-021800 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker: (5.4284491s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.43s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1027 18:57:02.031139   10564 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1027 18:57:02.031139   10564 preload.go:198] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
--- PASS: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.65s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-021800
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-021800: exit status 85 (649.6152ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-427100 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-427100 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-427100                                                                                                                           │ download-only-427100 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-021800 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker │ download-only-021800 │ minikube4\jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:56
	Running on machine: minikube4
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:56.680985    2768 out.go:360] Setting OutFile to fd 684 ...
	I1027 18:56:56.722769    2768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:56.722769    2768 out.go:374] Setting ErrFile to fd 696...
	I1027 18:56:56.722769    2768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:56.737770    2768 out.go:368] Setting JSON to true
	I1027 18:56:56.739772    2768 start.go:131] hostinfo: {"hostname":"minikube4","uptime":666,"bootTime":1761590749,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1027 18:56:56.739772    2768 start.go:139] gopshost.Virtualization returned error: not implemented yet
	I1027 18:56:57.005048    2768 out.go:99] [download-only-021800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	I1027 18:56:57.005388    2768 notify.go:220] Checking for updates...
	I1027 18:56:57.012303    2768 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1027 18:56:57.024780    2768 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1027 18:56:57.044258    2768 out.go:171] MINIKUBE_LOCATION=21801
	I1027 18:56:57.058426    2768 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1027 18:56:57.070268    2768 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 18:56:57.070864    2768 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:57.203347    2768 docker.go:123] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1027 18:56:57.210537    2768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:57.447159    2768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-10-27 18:56:57.427832626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 18:56:57.464442    2768 out.go:99] Using the docker driver based on user configuration
	I1027 18:56:57.465443    2768 start.go:305] selected driver: docker
	I1027 18:56:57.465443    2768 start.go:925] validating driver "docker" against <nil>
	I1027 18:56:57.477279    2768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:57.722859    2768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-10-27 18:56:57.702778407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 18:56:57.722859    2768 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:57.761379    2768 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1027 18:56:57.761379    2768 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 18:56:57.764370    2768 out.go:171] Using Docker Desktop driver with root privileges
	I1027 18:56:57.766370    2768 cni.go:84] Creating CNI manager for ""
	I1027 18:56:57.766370    2768 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1027 18:56:57.766370    2768 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:57.766370    2768 start.go:349] cluster config:
	{Name:download-only-021800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-021800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:57.768370    2768 out.go:99] Starting "download-only-021800" primary control-plane node in "download-only-021800" cluster
	I1027 18:56:57.768370    2768 cache.go:123] Beginning downloading kic base image for docker with docker
	I1027 18:56:57.771370    2768 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 18:56:57.771370    2768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1027 18:56:57.771370    2768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 18:56:57.812375    2768 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1027 18:56:57.813380    2768 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:57.813380    2768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1027 18:56:57.816374    2768 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1027 18:56:57.816374    2768 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1027 18:56:57.824371    2768 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:57.824371    2768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1760939008-21773@sha256_d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8.tar
	I1027 18:56:57.824371    2768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1760939008-21773@sha256_d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8.tar
	I1027 18:56:57.824371    2768 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 18:56:57.825379    2768 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 18:56:57.825379    2768 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 18:56:57.825379    2768 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 18:56:57.885906    2768 preload.go:290] Got checksum from GCS API "d7f0ccd752ff15c628c6fc8ef8c8033e"
	I1027 18:56:57.885906    2768 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4?checksum=md5:d7f0ccd752ff15c628c6fc8ef8c8033e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1027 18:57:00.815750    2768 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1027 18:57:00.816645    2768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-021800\config.json ...
	I1027 18:57:00.816645    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-021800\config.json: {Name:mk30071fe63442471f76b360bc26200b783860e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:00.834725    2768 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1027 18:57:00.835321    2768 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.34.1/kubectl.exe
	
	
	* The control-plane node download-only-021800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-021800"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.65s)

TestDownloadOnly/v1.34.1/DeleteAll (0.75s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.75s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.8s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-021800
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.80s)

TestDownloadOnlyKic (2.31s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-663900 --alsologtostderr --driver=docker
aaa_download_only_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-663900 --alsologtostderr --driver=docker: (1.3605404s)
helpers_test.go:175: Cleaning up "download-docker-663900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-663900
--- PASS: TestDownloadOnlyKic (2.31s)

TestBinaryMirror (2.18s)

=== RUN   TestBinaryMirror
I1027 18:57:07.828804   10564 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-513400 --alsologtostderr --binary-mirror http://127.0.0.1:54012 --driver=docker
aaa_download_only_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-513400 --alsologtostderr --binary-mirror http://127.0.0.1:54012 --driver=docker: (1.4076023s)
helpers_test.go:175: Cleaning up "binary-mirror-513400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-513400
--- PASS: TestBinaryMirror (2.18s)

TestOffline (137.31s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-066700 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-066700 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker: (2m12.6232646s)
helpers_test.go:175: Cleaning up "offline-docker-066700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-066700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-066700: (4.6882868s)
--- PASS: TestOffline (137.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-057200
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-057200: exit status 85 (218.5381ms)

-- stdout --
	* Profile "addons-057200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-057200"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-057200
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-057200: exit status 85 (203.1762ms)

-- stdout --
	* Profile "addons-057200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-057200"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.20s)

TestAddons/Setup (405.37s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-057200 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-057200 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (6m45.3668097s)
--- PASS: TestAddons/Setup (405.37s)

TestAddons/serial/Volcano (51.06s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 18.1425ms
addons_test.go:884: volcano-controller stabilized in 18.2052ms
addons_test.go:876: volcano-admission stabilized in 18.2052ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-vwg86" [804d9d67-8bed-4a66-8b16-1cfea7e0431d] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0053486s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-rr7rn" [45fa841b-4a40-4519-b0c2-cdba92783d91] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0065216s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-sssht" [8830ecde-8901-43a9-abfc-f98045b02177] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0065766s
addons_test.go:903: (dbg) Run:  kubectl --context addons-057200 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-057200 create -f testdata\vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-057200 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [b1c6f7d6-6f9d-49e8-8295-9ed3c5e2072f] Pending
helpers_test.go:352: "test-job-nginx-0" [b1c6f7d6-6f9d-49e8-8295-9ed3c5e2072f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [b1c6f7d6-6f9d-49e8-8295-9ed3c5e2072f] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 22.0065323s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable volcano --alsologtostderr -v=1: (12.299003s)
--- PASS: TestAddons/serial/Volcano (51.06s)

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-057200 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-057200 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (11.16s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-057200 create -f testdata\busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-057200 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6ef309d9-720d-434b-a302-0726481051f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6ef309d9-720d-434b-a302-0726481051f0] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.0052594s
addons_test.go:694: (dbg) Run:  kubectl --context addons-057200 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-057200 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-057200 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-057200 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.16s)

TestAddons/parallel/RegistryCreds (1.78s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 8.178ms
addons_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-057200
addons_test.go:332: (dbg) Run:  kubectl --context addons-057200 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable registry-creds --alsologtostderr -v=1: (1.1613103s)
--- PASS: TestAddons/parallel/RegistryCreds (1.78s)

TestAddons/parallel/InspektorGadget (6.49s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-v4n5r" [20fc0033-cd0f-4ba3-8210-2b0d539fee86] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0069832s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.49s)

TestAddons/parallel/MetricsServer (8.04s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.3019ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-xjmqs" [2be94b2a-9529-4440-9355-8c4456f832f4] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006657s
addons_test.go:463: (dbg) Run:  kubectl --context addons-057200 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable metrics-server --alsologtostderr -v=1: (1.8901217s)
--- PASS: TestAddons/parallel/MetricsServer (8.04s)

TestAddons/parallel/CSI (49.44s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1027 19:05:41.222366   10564 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1027 19:05:41.227363   10564 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1027 19:05:41.227363   10564 kapi.go:107] duration metric: took 4.9974ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.9974ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-057200 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-057200 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [59e75128-8e8e-4a76-85c6-afcff9d5eaed] Pending
helpers_test.go:352: "task-pv-pod" [59e75128-8e8e-4a76-85c6-afcff9d5eaed] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [59e75128-8e8e-4a76-85c6-afcff9d5eaed] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.0061006s
addons_test.go:572: (dbg) Run:  kubectl --context addons-057200 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-057200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-057200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-057200 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-057200 delete pod task-pv-pod: (1.6499351s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-057200 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-057200 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-057200 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [5dec47fe-29f2-4615-8f8a-3c8df271664b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [5dec47fe-29f2-4615-8f8a-3c8df271664b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0059877s
addons_test.go:614: (dbg) Run:  kubectl --context addons-057200 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-057200 delete pod task-pv-pod-restore: (1.3864579s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-057200 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-057200 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable volumesnapshots --alsologtostderr -v=1: (1.6087917s)
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.452615s)
--- PASS: TestAddons/parallel/CSI (49.44s)

TestAddons/parallel/Headlamp (36.25s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-057200 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-057200 --alsologtostderr -v=1: (1.8100009s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-rdlxw" [54cccd14-ee93-4e2c-a6f8-dc20c71bdc67] Pending
helpers_test.go:352: "headlamp-6945c6f4d-rdlxw" [54cccd14-ee93-4e2c-a6f8-dc20c71bdc67] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-rdlxw" [54cccd14-ee93-4e2c-a6f8-dc20c71bdc67] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-rdlxw" [54cccd14-ee93-4e2c-a6f8-dc20c71bdc67] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 28.0050035s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable headlamp --alsologtostderr -v=1: (6.4337936s)
--- PASS: TestAddons/parallel/Headlamp (36.25s)

TestAddons/parallel/CloudSpanner (7.05s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-g77fv" [f7510ad5-7e4a-42a8-8136-5e0ed5276d46] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0059119s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable cloud-spanner --alsologtostderr -v=1: (1.0160376s)
--- PASS: TestAddons/parallel/CloudSpanner (7.05s)

TestAddons/parallel/LocalPath (58.6s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-057200 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-057200 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-057200 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [0d84f41e-b7a9-4369-ae85-40d523e69a72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [0d84f41e-b7a9-4369-ae85-40d523e69a72] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [0d84f41e-b7a9-4369-ae85-40d523e69a72] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0050961s
addons_test.go:967: (dbg) Run:  kubectl --context addons-057200 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 ssh "cat /opt/local-path-provisioner/pvc-90c0a81a-ac48-4dbe-aef2-405b073f27c7_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-057200 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-057200 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.6286794s)
--- PASS: TestAddons/parallel/LocalPath (58.60s)

TestAddons/parallel/NvidiaDevicePlugin (5.96s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-9wq59" [16d9b0ac-900f-49ee-99a5-17bc3993ae56] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.087724s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.96s)

TestAddons/parallel/Yakd (13.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-jrlxc" [abc4e9b5-71ab-4d01-892c-8ad63bc5d74e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.035674s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable yakd --alsologtostderr -v=1: (7.7721292s)
--- PASS: TestAddons/parallel/Yakd (13.81s)

TestAddons/parallel/AmdGpuDevicePlugin (7.68s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-2256v" [2ac33e4b-b6d5-4cb4-8f80-2d83d44d4672] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.0061027s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (1.675226s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (7.68s)

TestAddons/StoppedEnableDisable (13.12s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-057200
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-057200: (12.2624666s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-057200
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-057200
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-057200
--- PASS: TestAddons/StoppedEnableDisable (13.12s)

TestCertOptions (69.04s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-187200 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-187200 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m3.8472716s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-187200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1027 20:03:20.975284   10564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-187200
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-187200 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-187200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-187200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-187200: (3.8910581s)
--- PASS: TestCertOptions (69.04s)

TestCertExpiration (286.54s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-729900 --memory=3072 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-729900 --memory=3072 --cert-expiration=3m --driver=docker: (57.7234313s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-729900 --memory=3072 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-729900 --memory=3072 --cert-expiration=8760h --driver=docker: (44.6103686s)
helpers_test.go:175: Cleaning up "cert-expiration-729900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-729900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-729900: (4.209287s)
--- PASS: TestCertExpiration (286.54s)

TestDockerFlags (62.14s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-630600 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-630600 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (56.1505981s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-630600 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-630600 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
E1027 20:02:11.571669   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:175: Cleaning up "docker-flags-630600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-630600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-630600: (4.734421s)
--- PASS: TestDockerFlags (62.14s)

TestForceSystemdFlag (106.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-066700 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-066700 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m41.6295663s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-066700 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-066700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-066700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-066700: (4.2517876s)
--- PASS: TestForceSystemdFlag (106.60s)

TestForceSystemdEnv (59.01s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-950400 --memory=3072 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-950400 --memory=3072 --alsologtostderr -v=5 --driver=docker: (54.4970601s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-950400 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-950400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-950400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-950400: (3.8729586s)
--- PASS: TestForceSystemdEnv (59.01s)

TestErrorSpam/start (2.66s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 start --dry-run
--- PASS: TestErrorSpam/start (2.66s)

TestErrorSpam/status (2.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 status
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 status
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 status
--- PASS: TestErrorSpam/status (2.17s)

TestErrorSpam/pause (2.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 pause: (1.1271874s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 pause
--- PASS: TestErrorSpam/pause (2.62s)

TestErrorSpam/unpause (2.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 unpause
--- PASS: TestErrorSpam/unpause (2.65s)

TestErrorSpam/stop (18.71s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 stop: (11.8938499s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 stop: (3.2779482s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-570800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-570800 stop: (3.5367211s)
--- PASS: TestErrorSpam/stop (18.71s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\10564\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (83.09s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-536500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker
E1027 19:08:55.598172   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:55.605389   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:55.617545   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:55.638934   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:55.681775   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:55.763750   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:55.926197   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:56.248286   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:56.890249   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:58.171895   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:09:00.734257   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:09:05.857109   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:09:16.098600   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:09:36.580515   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-536500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker: (1m23.0791958s)
--- PASS: TestFunctional/serial/StartWithProxy (83.09s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (59.02s)

=== RUN   TestFunctional/serial/SoftStart
I1027 19:09:38.036352   10564 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-536500 --alsologtostderr -v=8
E1027 19:10:17.544162   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-536500 --alsologtostderr -v=8: (59.0193613s)
functional_test.go:678: soft start took 59.0209132s for "functional-536500" cluster.
I1027 19:10:37.057021   10564 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (59.02s)

TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

TestFunctional/serial/KubectlGetPods (0.18s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-536500 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.18s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 cache add registry.k8s.io/pause:3.1: (3.9661898s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 cache add registry.k8s.io/pause:3.3: (3.0821082s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 cache add registry.k8s.io/pause:latest: (3.1295568s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.18s)

TestFunctional/serial/CacheCmd/cache/add_local (4.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-536500 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2297073393\001
functional_test.go:1092: (dbg) Done: docker build -t minikube-local-cache-test:functional-536500 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2297073393\001: (1.3841843s)
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 cache add minikube-local-cache-test:functional-536500
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 cache add minikube-local-cache-test:functional-536500: (2.644252s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 cache delete minikube-local-cache-test:functional-536500
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-536500
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.20s)

TestFunctional/serial/CacheCmd/cache/list (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.20s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.65s)

TestFunctional/serial/CacheCmd/cache/cache_reload (4.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-536500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (618.4542ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 cache reload: (2.7364563s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.56s)

TestFunctional/serial/CacheCmd/cache/delete (0.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.38s)

TestFunctional/serial/MinikubeKubectlCmd (0.48s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 kubectl -- --context functional-536500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.48s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.33s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-536500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.33s)

TestFunctional/serial/ExtraConfig (58.98s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-536500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1027 19:11:39.466787   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-536500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.9780151s)
functional_test.go:776: restart took 58.9780151s for "functional-536500" cluster.
I1027 19:11:57.587923   10564 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (58.98s)

TestFunctional/serial/ComponentHealth (0.14s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-536500 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)

TestFunctional/serial/LogsCmd (1.89s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 logs: (1.8886327s)
--- PASS: TestFunctional/serial/LogsCmd (1.89s)

TestFunctional/serial/LogsFileCmd (1.94s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd124159382\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd124159382\001\logs.txt: (1.9167563s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.94s)

TestFunctional/serial/InvalidService (5.37s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-536500 apply -f testdata\invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-536500
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-536500: exit status 115 (1.0379311s)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31597 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_2.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-536500 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.37s)

TestFunctional/parallel/ConfigCmd (1.25s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-536500 config get cpus: exit status 14 (171.9605ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-536500 config get cpus: exit status 14 (156.0072ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.25s)
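The ConfigCmd run above exercises a round-trip contract: `config get` on an unset key exits with status 14 and prints "Error: specified key could not be found in config" on stderr, while `config set`/`config unset` mutate the profile's stored config. A minimal sketch of that contract, using a hypothetical `minikube_config` stub (a temp file stands in for minikube's config store) so the flow is runnable without minikube itself:

```shell
cfg=$(mktemp)   # stand-in for minikube's on-disk config store

minikube_config() {   # hypothetical stub mimicking `minikube -p <profile> config <cmd> <key> [value]`
  case "$1" in
    set)   printf '%s=%s\n' "$2" "$3" >>"$cfg" ;;
    unset) grep -v "^$2=" "$cfg" >"$cfg.tmp" || true; mv "$cfg.tmp" "$cfg" ;;
    get)
      val=$(grep "^$2=" "$cfg" | tail -n 1 | cut -d= -f2)
      if [ -n "$val" ]; then
        echo "$val"
      else
        # mirrors the real CLI's behavior for a missing key: stderr message + exit 14
        echo "Error: specified key could not be found in config" >&2
        return 14
      fi ;;
  esac
}

minikube_config get cpus || echo "exit status $?"   # unset key -> exit status 14
minikube_config set cpus 2
minikube_config get cpus                            # prints 2
minikube_config unset cpus
minikube_config get cpus || echo "exit status $?"   # exit status 14 again
```

This is the same unset/get/set/get/unset/get sequence the test at functional_test.go:1214 runs, with the two expected non-zero exits at the start and end.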

                                                
                                    
TestFunctional/parallel/DryRun (1.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-536500 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-536500 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (635.6245ms)
-- stdout --
	* [functional-536500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1027 19:13:20.053144    8100 out.go:360] Setting OutFile to fd 1668 ...
	I1027 19:13:20.094741    8100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:13:20.094741    8100 out.go:374] Setting ErrFile to fd 1672...
	I1027 19:13:20.094741    8100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:13:20.112198    8100 out.go:368] Setting JSON to false
	I1027 19:13:20.115667    8100 start.go:131] hostinfo: {"hostname":"minikube4","uptime":1650,"bootTime":1761590749,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1027 19:13:20.115667    8100 start.go:139] gopshost.Virtualization returned error: not implemented yet
	I1027 19:13:20.120293    8100 out.go:179] * [functional-536500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	I1027 19:13:20.123923    8100 notify.go:220] Checking for updates...
	I1027 19:13:20.126331    8100 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1027 19:13:20.128921    8100 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:13:20.132218    8100 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1027 19:13:20.138270    8100 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:13:20.142036    8100 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:13:20.144900    8100 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 19:13:20.145644    8100 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:13:20.272993    8100 docker.go:123] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1027 19:13:20.278983    8100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:13:20.523419    8100 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:83 SystemTime:2025-10-27 19:13:20.498301595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 19:13:20.528294    8100 out.go:179] * Using the docker driver based on existing profile
	I1027 19:13:20.530916    8100 start.go:305] selected driver: docker
	I1027 19:13:20.530916    8100 start.go:925] validating driver "docker" against &{Name:functional-536500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-536500 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:13:20.530916    8100 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:13:20.569445    8100 out.go:203] 
	W1027 19:13:20.572752    8100 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1027 19:13:20.575832    8100 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-536500 --dry-run --alsologtostderr -v=1 --driver=docker
--- PASS: TestFunctional/parallel/DryRun (1.48s)
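The DryRun failure path above is memory validation: `--memory 250MB` is below minikube's usable minimum of 1800MB, so the start aborts with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any container work happens. A sketch of that check, using a hypothetical `validate_memory` function that mirrors the message and exit code from the log (it is not minikube's actual implementation):

```shell
validate_memory() {
  req_mb=$1
  min_mb=1800   # usable minimum from the log above
  if [ "$req_mb" -lt "$min_mb" ]; then
    # the log's message mixes MiB/MB; reproduced as-is
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${req_mb}MiB is less than the usable minimum of ${min_mb}MB" >&2
    return 23
  fi
  echo "memory OK: ${req_mb}MB"
}

validate_memory 250 || echo "exit status $?"   # exit status 23
validate_memory 4096                           # memory OK: 4096MB
```

The test asserts exactly this: a dry run with 250MB must exit 23, while the second dry run with the profile's default (4096MB here) must succeed.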

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.66s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-536500 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-536500 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (658.4284ms)
-- stdout --
	* [functional-536500] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1027 19:13:17.343329    3120 out.go:360] Setting OutFile to fd 1244 ...
	I1027 19:13:17.389331    3120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:13:17.389331    3120 out.go:374] Setting ErrFile to fd 1204...
	I1027 19:13:17.389331    3120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:13:17.406943    3120 out.go:368] Setting JSON to false
	I1027 19:13:17.410007    3120 start.go:131] hostinfo: {"hostname":"minikube4","uptime":1647,"bootTime":1761590749,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6456 Build 19045.6456","kernelVersion":"10.0.19045.6456 Build 19045.6456","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1027 19:13:17.410007    3120 start.go:139] gopshost.Virtualization returned error: not implemented yet
	I1027 19:13:17.413450    3120 out.go:179] * [functional-536500] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	I1027 19:13:17.417309    3120 notify.go:220] Checking for updates...
	I1027 19:13:17.420042    3120 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1027 19:13:17.425121    3120 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:13:17.428087    3120 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1027 19:13:17.430062    3120 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:13:17.437238    3120 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:13:17.440516    3120 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 19:13:17.441973    3120 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:13:17.562718    3120 docker.go:123] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1027 19:13:17.569117    3120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:13:17.809664    3120 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:83 SystemTime:2025-10-27 19:13:17.788781191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1027 19:13:17.813028    3120 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1027 19:13:17.816749    3120 start.go:305] selected driver: docker
	I1027 19:13:17.816749    3120 start.go:925] validating driver "docker" against &{Name:functional-536500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-536500 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:13:17.816749    3120 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:13:17.869576    3120 out.go:203] 
	W1027 19:13:17.872329    3120 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1027 19:13:17.874498    3120 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.66s)

TestFunctional/parallel/StatusCmd (2.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 status
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.06s)
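The StatusCmd run above checks three output modes of `minikube status`: the default human-readable form, a custom Go template via `-f` (the template in the test command literally spells `kublet:` as the label text; only the `{{.Kubelet}}` field reference has to match the status struct), and JSON via `-o json`. A sketch of the three shapes using stub values: the `Host`/`Kubelet`/`APIServer`/`Kubeconfig` field names appear in minikube's status output, but the `Running`/`Configured` values and the abridged layouts here are illustrative, not captured from this run.

```shell
# stub values standing in for the status fields of profile functional-536500
HOST=Running; KUBELET=Running; APISERVER=Running; KUBECONFIG_STATE=Configured

# 1) default human-readable form (abridged)
printf 'host: %s\nkubelet: %s\napiserver: %s\nkubeconfig: %s\n' \
  "$HOST" "$KUBELET" "$APISERVER" "$KUBECONFIG_STATE"

# 2) custom Go-template form, as in `status -f host:{{.Host}},kublet:{{.Kubelet}},...`
printf 'host:%s,kublet:%s,apiserver:%s,kubeconfig:%s\n' \
  "$HOST" "$KUBELET" "$APISERVER" "$KUBECONFIG_STATE"

# 3) JSON form, as in `status -o json` (abridged)
printf '{"Name":"functional-536500","Host":"%s","Kubelet":"%s","APIServer":"%s","Kubeconfig":"%s"}\n' \
  "$HOST" "$KUBELET" "$APISERVER" "$KUBECONFIG_STATE"
```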

                                                
                                    
TestFunctional/parallel/AddonsCmd (1.05s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (1.05s)

TestFunctional/parallel/PersistentVolumeClaim (64.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [361745e9-6fb4-4a83-8657-d98d20f92491] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0054551s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-536500 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-536500 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-536500 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-536500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [796bc709-6091-428e-a0ac-e88cfe1622bd] Pending
helpers_test.go:352: "sp-pod" [796bc709-6091-428e-a0ac-e88cfe1622bd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [796bc709-6091-428e-a0ac-e88cfe1622bd] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 48.0065844s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-536500 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-536500 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-536500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e742e320-409e-48fb-874c-fe1f20d3ad50] Pending
helpers_test.go:352: "sp-pod" [e742e320-409e-48fb-874c-fe1f20d3ad50] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e742e320-409e-48fb-874c-fe1f20d3ad50] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0062368s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-536500 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (64.57s)

                                                
                                    
TestFunctional/parallel/SSHCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.28s)

                                                
                                    
TestFunctional/parallel/CpCmd (3.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh -n functional-536500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 cp functional-536500:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd1295475243\001\cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh -n functional-536500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh -n functional-536500 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (3.61s)

                                                
                                    
TestFunctional/parallel/MySQL (70.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-536500 replace --force -f testdata\mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-jrn8s" [c3218ae4-ea6a-496c-a4ce-7ed478c4e615] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-jrn8s" [c3218ae4-ea6a-496c-a4ce-7ed478c4e615] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 46.0065571s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;": exit status 1 (245.8777ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1027 19:12:57.801001   10564 retry.go:31] will retry after 1.159795008s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;": exit status 1 (288.6749ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1027 19:12:59.257690   10564 retry.go:31] will retry after 1.109542808s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;": exit status 1 (292.752ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1027 19:13:00.667212   10564 retry.go:31] will retry after 3.244508447s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;": exit status 1 (357.1626ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1027 19:13:04.275985   10564 retry.go:31] will retry after 1.973102814s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;": exit status 1 (307.1871ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1027 19:13:06.565033   10564 retry.go:31] will retry after 3.13288967s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;": exit status 1 (340.2684ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1027 19:13:10.044997   10564 retry.go:31] will retry after 11.285728506s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-536500 exec mysql-5bb876957f-jrn8s -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (70.42s)

                                                
                                    
TestFunctional/parallel/FileSync (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/10564/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh "sudo cat /etc/test/nested/copy/10564/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.63s)

                                                
                                    
TestFunctional/parallel/CertSync (3.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/10564.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh "sudo cat /etc/ssl/certs/10564.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/10564.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh "sudo cat /usr/share/ca-certificates/10564.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/105642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh "sudo cat /etc/ssl/certs/105642.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/105642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh "sudo cat /usr/share/ca-certificates/105642.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.79s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-536500 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)
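The go-template in the run above, `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`, prints only the label keys of the first node. `text/template` visits map keys in sorted order, so the output is deterministic. A sketch of the same range shape over a plain map (the labels below are invented for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// RenderLabels applies the NodeLabels-style template to a label map,
// emitting each key followed by a space, in sorted key order.
func RenderLabels(labels map[string]string) string {
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := .}}{{$k}} {{end}}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, labels); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(RenderLabels(map[string]string{
		"kubernetes.io/os":       "linux",
		"kubernetes.io/hostname": "functional-536500",
	}))
	// prints the two keys in sorted order: hostname before os
}
```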

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-536500 ssh "sudo systemctl is-active crio": exit status 1 (588.3612ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                    
TestFunctional/parallel/License (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (1.4066156s)
--- PASS: TestFunctional/parallel/License (1.42s)

                                                
                                    
TestFunctional/parallel/Version/short (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 version --short
--- PASS: TestFunctional/parallel/Version/short (0.18s)

                                                
                                    
TestFunctional/parallel/Version/components (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-536500 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-536500
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-536500
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-536500 image ls --format short --alsologtostderr:
I1027 19:13:23.739786   13732 out.go:360] Setting OutFile to fd 1740 ...
I1027 19:13:23.783970   13732 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:13:23.783970   13732 out.go:374] Setting ErrFile to fd 1744...
I1027 19:13:23.783970   13732 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:13:23.795727   13732 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1027 19:13:23.795727   13732 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1027 19:13:23.808079   13732 cli_runner.go:164] Run: docker container inspect functional-536500 --format={{.State.Status}}
I1027 19:13:23.868278   13732 ssh_runner.go:195] Run: systemctl --version
I1027 19:13:23.874080   13732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536500
I1027 19:13:23.931133   13732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-536500\id_rsa Username:docker}
I1027 19:13:24.060816   13732 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-536500 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ 7dd6aaa1717ab │ 52.8MB │
│ docker.io/library/minikube-local-cache-test │ functional-536500 │ 77ff4c3cdcf26 │ 30B    │
│ docker.io/library/nginx                     │ latest            │ 657fdcd1c3659 │ 152MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ c3994bc696102 │ 88MB   │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-536500 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ docker.io/library/nginx                     │ alpine            │ 5e7abcdd20216 │ 52.8MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ c80c8dbafe7dd │ 74.9MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ fc25172553d79 │ 71.9MB │
│ docker.io/library/mysql                     │ 5.7               │ 5107333e08a87 │ 501MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-536500 image ls --format table --alsologtostderr:
I1027 19:13:25.521596    7676 out.go:360] Setting OutFile to fd 1688 ...
I1027 19:13:25.569818    7676 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:13:25.569818    7676 out.go:374] Setting ErrFile to fd 1172...
I1027 19:13:25.569818    7676 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:13:25.581802    7676 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1027 19:13:25.582805    7676 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1027 19:13:25.593182    7676 cli_runner.go:164] Run: docker container inspect functional-536500 --format={{.State.Status}}
I1027 19:13:25.658803    7676 ssh_runner.go:195] Run: systemctl --version
I1027 19:13:25.663682    7676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536500
I1027 19:13:25.721612    7676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-536500\id_rsa Username:docker}
I1027 19:13:25.860137    7676 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-536500 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"52800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52800000"},{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"152000000"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"88000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"77ff4c3cdcf26a37f04e30807330845035591c6beefa632131b0710e28077784","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-536500"],"size":"30"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"71900000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-536500","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"74900000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-536500 image ls --format json --alsologtostderr:
I1027 19:13:25.041543   12696 out.go:360] Setting OutFile to fd 1524 ...
I1027 19:13:25.085762   12696 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:13:25.085762   12696 out.go:374] Setting ErrFile to fd 1520...
I1027 19:13:25.085762   12696 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:13:25.096762   12696 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1027 19:13:25.096762   12696 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1027 19:13:25.108764   12696 cli_runner.go:164] Run: docker container inspect functional-536500 --format={{.State.Status}}
I1027 19:13:25.164764   12696 ssh_runner.go:195] Run: systemctl --version
I1027 19:13:25.171552   12696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536500
I1027 19:13:25.227275   12696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-536500\id_rsa Username:docker}
I1027 19:13:25.371837   12696 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.48s)
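The JSON stdout above is an array of image records with `id`, `repoDigests`, `repoTags`, and `size` fields. A sketch of unmarshalling a subset of those fields in Go (the struct name, helper, and sample data below are invented for illustration; field names follow the output shown):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// image models a subset of the fields in the `image ls --format json` output.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

// parseImages decodes the JSON array emitted by `image ls --format json`.
func parseImages(data []byte) ([]image, error) {
	var imgs []image
	err := json.Unmarshal(data, &imgs)
	return imgs, err
}

func main() {
	sample := []byte(`[{"id":"da86e6ba6ca19","repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]`)
	imgs, err := parseImages(sample)
	if err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Println(im.RepoTags[0], im.Size)
	}
	// prints: registry.k8s.io/pause:3.1 742000
}
```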

TestFunctional/parallel/ImageCommands/ImageListYaml (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-536500 image ls --format yaml --alsologtostderr:
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "152000000"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "74900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "52800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "71900000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-536500
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 77ff4c3cdcf26a37f04e30807330845035591c6beefa632131b0710e28077784
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-536500
size: "30"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52800000"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "88000000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-536500 image ls --format yaml --alsologtostderr:
I1027 19:13:24.199621    2260 out.go:360] Setting OutFile to fd 1652 ...
I1027 19:13:24.242596    2260 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:13:24.242596    2260 out.go:374] Setting ErrFile to fd 1496...
I1027 19:13:24.242596    2260 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:13:24.253342    2260 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1027 19:13:24.254387    2260 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1027 19:13:24.267194    2260 cli_runner.go:164] Run: docker container inspect functional-536500 --format={{.State.Status}}
I1027 19:13:24.330068    2260 ssh_runner.go:195] Run: systemctl --version
I1027 19:13:24.335210    2260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536500
I1027 19:13:24.388848    2260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-536500\id_rsa Username:docker}
I1027 19:13:24.520799    2260 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.47s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-536500 ssh pgrep buildkitd: exit status 1 (571.8254ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image build -t localhost/my-image:functional-536500 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 image build -t localhost/my-image:functional-536500 testdata\build --alsologtostderr: (3.8852592s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-536500 image build -t localhost/my-image:functional-536500 testdata\build --alsologtostderr:
I1027 19:13:25.240401    6980 out.go:360] Setting OutFile to fd 1496 ...
I1027 19:13:25.304662    6980 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:13:25.304662    6980 out.go:374] Setting ErrFile to fd 1424...
I1027 19:13:25.304662    6980 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:13:25.317710    6980 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1027 19:13:25.337723    6980 config.go:182] Loaded profile config "functional-536500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1027 19:13:25.349910    6980 cli_runner.go:164] Run: docker container inspect functional-536500 --format={{.State.Status}}
I1027 19:13:25.410595    6980 ssh_runner.go:195] Run: systemctl --version
I1027 19:13:25.416610    6980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-536500
I1027 19:13:25.468598    6980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-536500\id_rsa Username:docker}
I1027 19:13:25.603103    6980 build_images.go:161] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3387140547.tar
I1027 19:13:25.610751    6980 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1027 19:13:25.632621    6980 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3387140547.tar
I1027 19:13:25.642954    6980 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3387140547.tar: stat -c "%s %y" /var/lib/minikube/build/build.3387140547.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3387140547.tar': No such file or directory
I1027 19:13:25.642954    6980 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3387140547.tar --> /var/lib/minikube/build/build.3387140547.tar (3072 bytes)
I1027 19:13:25.683195    6980 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3387140547
I1027 19:13:25.706020    6980 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3387140547 -xf /var/lib/minikube/build/build.3387140547.tar
I1027 19:13:25.721612    6980 docker.go:361] Building image: /var/lib/minikube/build/build.3387140547
I1027 19:13:25.730829    6980 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-536500 /var/lib/minikube/build/build.3387140547
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:5b18ea23b98825f358530d8b029c656e50204d89ecebcb2c42d565f4ecb278ec done
#8 naming to localhost/my-image:functional-536500 0.0s done
#8 DONE 0.2s
I1027 19:13:28.968333    6980 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-536500 /var/lib/minikube/build/build.3387140547: (3.2374446s)
I1027 19:13:28.975559    6980 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3387140547
I1027 19:13:28.998477    6980 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3387140547.tar
I1027 19:13:29.012671    6980 build_images.go:217] Built localhost/my-image:functional-536500 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3387140547.tar
I1027 19:13:29.012775    6980 build_images.go:133] succeeded building to: functional-536500
I1027 19:13:29.012845    6980 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.93s)
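The stderr above shows the mechanics behind `image build`: minikube packages the local `testdata\build` context into a tar, copies it to `/var/lib/minikube/build/` on the node, extracts it, and runs `docker build` there. A sketch of the packaging step; the Dockerfile body is an assumption reconstructed from BuildKit steps #5-#7 (FROM busybox, RUN true, ADD content.txt /), not copied from the minikube repo:

```python
import os
import tarfile
import tempfile

# Hypothetical stand-in for the testdata\build context; contents inferred
# from the BuildKit log above, not taken from the minikube repo.
ctx = tempfile.mkdtemp()
with open(os.path.join(ctx, "Dockerfile"), "w") as f:
    f.write("FROM gcr.io/k8s-minikube/busybox:latest\n"
            "RUN true\n"
            "ADD content.txt /\n")
with open(os.path.join(ctx, "content.txt"), "w") as f:
    f.write("hello\n")

# Package the context the way build_images.go does before copying it
# to /var/lib/minikube/build/build.<N>.tar inside the node.
tar_path = os.path.join(ctx, "build.tar")
with tarfile.open(tar_path, "w") as tar:
    for name in ("Dockerfile", "content.txt"):
        tar.add(os.path.join(ctx, name), arcname=name)

with tarfile.open(tar_path) as tar:
    print(sorted(tar.getnames()))  # ['Dockerfile', 'content.txt']

# On the node, minikube then runs roughly:
#   docker build -t localhost/my-image:functional-536500 /var/lib/minikube/build/build.<N>
```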

TestFunctional/parallel/ImageCommands/Setup (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.7343308s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-536500
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

TestFunctional/parallel/DockerEnv/powershell (11.91s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-536500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-536500"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-536500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-536500": (9.0234644s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-536500 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-536500 docker-env | Invoke-Expression ; docker images": (2.8777542s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (11.91s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image load --daemon kicbase/echo-server:functional-536500 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 image load --daemon kicbase/echo-server:functional-536500 --alsologtostderr: (3.1362451s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.65s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1.07s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.07s)

TestFunctional/parallel/ProfileCmd/profile_list (1.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "951.5039ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "260.981ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image load --daemon kicbase/echo-server:functional-536500 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 image load --daemon kicbase/echo-server:functional-536500 --alsologtostderr: (3.7266946s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (1.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1376: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.0807117s)
functional_test.go:1381: Took "1.0807117s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "292.9857ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-536500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-536500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-536500 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-536500 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 6616: OpenProcess: The parameter is incorrect.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.12s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-536500 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (45.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-536500 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [a6c3a054-193b-4040-85cd-16a22d5d8b15] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [a6c3a054-193b-4040-85cd-16a22d5d8b15] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 45.0452432s
I1027 19:13:02.945215   10564 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (45.62s)
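The helper here polls until pods labelled `run=nginx-svc` report healthy, bounded by the stated 4m0s deadline (satisfied after about 45s in this run). The generic shape of that wait loop, sketched with a stand-in check in place of the kubectl pod query:

```python
import time

def wait_for(check, timeout_s=240.0, interval_s=0.01):
    """Poll `check` until it returns True or `timeout_s` elapses.

    Mirrors the pattern the test helper uses (waiting 4m0s for
    run=nginx-svc pods); `check` stands in for the kubectl pod query.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

# Stand-in check: the "pod" becomes Running after a few polls,
# like the Pending -> Running transition logged above.
states = iter(["Pending", "Pending", "Running"])

def pod_running():
    return next(states, "Running") == "Running"

print(wait_for(pod_running, timeout_s=1.0))  # True
```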

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (1.6603609s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-536500
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image load --daemon kicbase/echo-server:functional-536500 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 image load --daemon kicbase/echo-server:functional-536500 --alsologtostderr: (2.8491343s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.08s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image save kicbase/echo-server:functional-536500 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image rm kicbase/echo-server:functional-536500 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.96s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.36s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-536500
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 image save --daemon kicbase/echo-server:functional-536500 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 image save --daemon kicbase/echo-server:functional-536500 --alsologtostderr: (1.1099603s)
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-536500
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.26s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-536500 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-536500 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 2624: TerminateProcess: Access is denied.
helpers_test.go:525: unable to kill pid 9064: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-536500 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-536500 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-4tqcn" [42a1a18d-883e-49e4-9148-ae747930f8c4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-4tqcn" [42a1a18d-883e-49e4-9148-ae747930f8c4] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.0059291s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.48s)

TestFunctional/parallel/ServiceCmd/List (1.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 service list
functional_test.go:1469: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 service list: (1.278581s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-windows-amd64.exe -p functional-536500 service list -o json: (1.241103s)
functional_test.go:1504: Took "1.241103s" to run "out/minikube-windows-amd64.exe -p functional-536500 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-536500 service --namespace=default --https --url hello-node: exit status 1 (15.0107051s)

                                                
                                                
-- stdout --
	https://127.0.0.1:55412

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1532: found endpoint: https://127.0.0.1:55412
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-536500 service hello-node --url --format={{.IP}}: exit status 1 (15.0099978s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-536500 service hello-node --url
E1027 19:13:55.599457   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-536500 service hello-node --url: exit status 1 (15.009086s)

-- stdout --
	http://127.0.0.1:55441

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1575: found endpoint for hello-node: http://127.0.0.1:55441
E1027 19:14:23.309790   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/delete_echo-server_images (0.16s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-536500
--- PASS: TestFunctional/delete_echo-server_images (0.16s)

TestFunctional/delete_my-image_image (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-536500
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-536500
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestMultiControlPlane/serial/StartCluster (228.26s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker
E1027 19:18:55.601150   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:11.545858   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:11.552666   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:11.564433   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:11.586978   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:11.628597   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:11.710147   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:11.871920   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:12.193723   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:12.836481   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker: (3m46.5459917s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5
E1027 19:22:14.118473   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5: (1.7096308s)
--- PASS: TestMultiControlPlane/serial/StartCluster (228.26s)

TestMultiControlPlane/serial/DeployApp (9.12s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- rollout status deployment/busybox
E1027 19:22:16.680459   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 kubectl -- rollout status deployment/busybox: (4.1354627s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-fnrd5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-tsvjf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-xwsxv -- nslookup kubernetes.io
E1027 19:22:21.803426   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-fnrd5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-tsvjf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-xwsxv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-fnrd5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-tsvjf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-xwsxv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.12s)

TestMultiControlPlane/serial/PingHostFromPods (2.55s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-fnrd5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-fnrd5 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-tsvjf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-tsvjf -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-xwsxv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 kubectl -- exec busybox-7b57f96db7-xwsxv -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.55s)

TestMultiControlPlane/serial/AddWorkerNode (58.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 node add --alsologtostderr -v 5
E1027 19:22:32.045979   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:52.529111   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 node add --alsologtostderr -v 5: (56.0136115s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5: (2.0522382s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.07s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-831400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (2.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.1015012s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.10s)

TestMultiControlPlane/serial/CopyFile (35.05s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 status --output json --alsologtostderr -v 5: (2.0366182s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp testdata\cp-test.txt ha-831400:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823798514\001\cp-test_ha-831400.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400:/home/docker/cp-test.txt ha-831400-m02:/home/docker/cp-test_ha-831400_ha-831400-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m02 "sudo cat /home/docker/cp-test_ha-831400_ha-831400-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400:/home/docker/cp-test.txt ha-831400-m03:/home/docker/cp-test_ha-831400_ha-831400-m03.txt
E1027 19:23:33.491503   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m03 "sudo cat /home/docker/cp-test_ha-831400_ha-831400-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400:/home/docker/cp-test.txt ha-831400-m04:/home/docker/cp-test_ha-831400_ha-831400-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m04 "sudo cat /home/docker/cp-test_ha-831400_ha-831400-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp testdata\cp-test.txt ha-831400-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823798514\001\cp-test_ha-831400-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m02:/home/docker/cp-test.txt ha-831400:/home/docker/cp-test_ha-831400-m02_ha-831400.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400 "sudo cat /home/docker/cp-test_ha-831400-m02_ha-831400.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m02:/home/docker/cp-test.txt ha-831400-m03:/home/docker/cp-test_ha-831400-m02_ha-831400-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m03 "sudo cat /home/docker/cp-test_ha-831400-m02_ha-831400-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m02:/home/docker/cp-test.txt ha-831400-m04:/home/docker/cp-test_ha-831400-m02_ha-831400-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m04 "sudo cat /home/docker/cp-test_ha-831400-m02_ha-831400-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp testdata\cp-test.txt ha-831400-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823798514\001\cp-test_ha-831400-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m03:/home/docker/cp-test.txt ha-831400:/home/docker/cp-test_ha-831400-m03_ha-831400.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400 "sudo cat /home/docker/cp-test_ha-831400-m03_ha-831400.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m03:/home/docker/cp-test.txt ha-831400-m02:/home/docker/cp-test_ha-831400-m03_ha-831400-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m02 "sudo cat /home/docker/cp-test_ha-831400-m03_ha-831400-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m03:/home/docker/cp-test.txt ha-831400-m04:/home/docker/cp-test_ha-831400-m03_ha-831400-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m04 "sudo cat /home/docker/cp-test_ha-831400-m03_ha-831400-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp testdata\cp-test.txt ha-831400-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823798514\001\cp-test_ha-831400-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m04 "sudo cat /home/docker/cp-test.txt"
E1027 19:23:55.603815   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m04:/home/docker/cp-test.txt ha-831400:/home/docker/cp-test_ha-831400-m04_ha-831400.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400 "sudo cat /home/docker/cp-test_ha-831400-m04_ha-831400.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m04:/home/docker/cp-test.txt ha-831400-m02:/home/docker/cp-test_ha-831400-m04_ha-831400-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m02 "sudo cat /home/docker/cp-test_ha-831400-m04_ha-831400-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 cp ha-831400-m04:/home/docker/cp-test.txt ha-831400-m03:/home/docker/cp-test_ha-831400-m04_ha-831400-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 ssh -n ha-831400-m03 "sudo cat /home/docker/cp-test_ha-831400-m04_ha-831400-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (35.05s)

TestMultiControlPlane/serial/StopSecondaryNode (13.5s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 node stop m02 --alsologtostderr -v 5: (11.8708359s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5: exit status 7 (1.6327402s)

-- stdout --
	ha-831400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-831400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-831400-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-831400-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1027 19:24:13.984824    3312 out.go:360] Setting OutFile to fd 1680 ...
	I1027 19:24:14.030573    3312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:24:14.030573    3312 out.go:374] Setting ErrFile to fd 1128...
	I1027 19:24:14.030573    3312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:24:14.043246    3312 out.go:368] Setting JSON to false
	I1027 19:24:14.043310    3312 mustload.go:65] Loading cluster: ha-831400
	I1027 19:24:14.043383    3312 notify.go:220] Checking for updates...
	I1027 19:24:14.043541    3312 config.go:182] Loaded profile config "ha-831400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 19:24:14.043541    3312 status.go:174] checking status of ha-831400 ...
	I1027 19:24:14.057388    3312 cli_runner.go:164] Run: docker container inspect ha-831400 --format={{.State.Status}}
	I1027 19:24:14.113236    3312 status.go:371] ha-831400 host status = "Running" (err=<nil>)
	I1027 19:24:14.113236    3312 host.go:66] Checking if "ha-831400" exists ...
	I1027 19:24:14.120723    3312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-831400
	I1027 19:24:14.173280    3312 host.go:66] Checking if "ha-831400" exists ...
	I1027 19:24:14.182160    3312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:24:14.187046    3312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-831400
	I1027 19:24:14.241536    3312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55474 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-831400\id_rsa Username:docker}
	I1027 19:24:14.379966    3312 ssh_runner.go:195] Run: systemctl --version
	I1027 19:24:14.400440    3312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:24:14.427519    3312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-831400
	I1027 19:24:14.485170    3312 kubeconfig.go:125] found "ha-831400" server: "https://127.0.0.1:55478"
	I1027 19:24:14.485243    3312 api_server.go:166] Checking apiserver status ...
	I1027 19:24:14.492333    3312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:24:14.520249    3312 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2359/cgroup
	I1027 19:24:14.533297    3312 api_server.go:182] apiserver freezer: "7:freezer:/docker/a6b5b808e5e8b19293a2d1ba04d4c6edeab3e7c62f81ab2e38e193d83955b246/kubepods/burstable/pod8b031211f898bc2553c259f5b57bc2aa/cb7f51fa9a9937e56b69345bf35e44cfbc720fab49fcc37dadc9054997b80878"
	I1027 19:24:14.539813    3312 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a6b5b808e5e8b19293a2d1ba04d4c6edeab3e7c62f81ab2e38e193d83955b246/kubepods/burstable/pod8b031211f898bc2553c259f5b57bc2aa/cb7f51fa9a9937e56b69345bf35e44cfbc720fab49fcc37dadc9054997b80878/freezer.state
	I1027 19:24:14.553476    3312 api_server.go:204] freezer state: "THAWED"
	I1027 19:24:14.553476    3312 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55478/healthz ...
	I1027 19:24:14.565160    3312 api_server.go:279] https://127.0.0.1:55478/healthz returned 200:
	ok
	I1027 19:24:14.565187    3312 status.go:463] ha-831400 apiserver status = Running (err=<nil>)
	I1027 19:24:14.565223    3312 status.go:176] ha-831400 status: &{Name:ha-831400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:24:14.565249    3312 status.go:174] checking status of ha-831400-m02 ...
	I1027 19:24:14.576501    3312 cli_runner.go:164] Run: docker container inspect ha-831400-m02 --format={{.State.Status}}
	I1027 19:24:14.631317    3312 status.go:371] ha-831400-m02 host status = "Stopped" (err=<nil>)
	I1027 19:24:14.631317    3312 status.go:384] host is not running, skipping remaining checks
	I1027 19:24:14.631317    3312 status.go:176] ha-831400-m02 status: &{Name:ha-831400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:24:14.631317    3312 status.go:174] checking status of ha-831400-m03 ...
	I1027 19:24:14.643908    3312 cli_runner.go:164] Run: docker container inspect ha-831400-m03 --format={{.State.Status}}
	I1027 19:24:14.698678    3312 status.go:371] ha-831400-m03 host status = "Running" (err=<nil>)
	I1027 19:24:14.698745    3312 host.go:66] Checking if "ha-831400-m03" exists ...
	I1027 19:24:14.706038    3312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-831400-m03
	I1027 19:24:14.762490    3312 host.go:66] Checking if "ha-831400-m03" exists ...
	I1027 19:24:14.770421    3312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:24:14.775151    3312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-831400-m03
	I1027 19:24:14.845242    3312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55594 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-831400-m03\id_rsa Username:docker}
	I1027 19:24:15.000029    3312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:24:15.025223    3312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-831400
	I1027 19:24:15.078813    3312 kubeconfig.go:125] found "ha-831400" server: "https://127.0.0.1:55478"
	I1027 19:24:15.078902    3312 api_server.go:166] Checking apiserver status ...
	I1027 19:24:15.088220    3312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:24:15.115441    3312 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2291/cgroup
	I1027 19:24:15.129112    3312 api_server.go:182] apiserver freezer: "7:freezer:/docker/8f51413337ccc26c7f9e6cbc34b08d553ee00d3fda9ee55cd27e0f4b2fa96c45/kubepods/burstable/podff7e81210e3a42b82114992e45144890/a2c2a5fb3c6a4d5b5eb189aa05b4103e4db9a4d93ca36a26618773a3b4248d7f"
	I1027 19:24:15.136112    3312 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8f51413337ccc26c7f9e6cbc34b08d553ee00d3fda9ee55cd27e0f4b2fa96c45/kubepods/burstable/podff7e81210e3a42b82114992e45144890/a2c2a5fb3c6a4d5b5eb189aa05b4103e4db9a4d93ca36a26618773a3b4248d7f/freezer.state
	I1027 19:24:15.150213    3312 api_server.go:204] freezer state: "THAWED"
	I1027 19:24:15.150213    3312 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55478/healthz ...
	I1027 19:24:15.159368    3312 api_server.go:279] https://127.0.0.1:55478/healthz returned 200:
	ok
	I1027 19:24:15.159368    3312 status.go:463] ha-831400-m03 apiserver status = Running (err=<nil>)
	I1027 19:24:15.159368    3312 status.go:176] ha-831400-m03 status: &{Name:ha-831400-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:24:15.159368    3312 status.go:174] checking status of ha-831400-m04 ...
	I1027 19:24:15.172991    3312 cli_runner.go:164] Run: docker container inspect ha-831400-m04 --format={{.State.Status}}
	I1027 19:24:15.229005    3312 status.go:371] ha-831400-m04 host status = "Running" (err=<nil>)
	I1027 19:24:15.229005    3312 host.go:66] Checking if "ha-831400-m04" exists ...
	I1027 19:24:15.235978    3312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-831400-m04
	I1027 19:24:15.288750    3312 host.go:66] Checking if "ha-831400-m04" exists ...
	I1027 19:24:15.297601    3312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:24:15.303829    3312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-831400-m04
	I1027 19:24:15.359426    3312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55727 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-831400-m04\id_rsa Username:docker}
	I1027 19:24:15.492866    3312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:24:15.511814    3312 status.go:176] ha-831400-m04 status: &{Name:ha-831400-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.50s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.6621534s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (55s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 node start m02 --alsologtostderr -v 5
E1027 19:24:55.414087   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 node start m02 --alsologtostderr -v 5: (52.2652827s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5: (2.5879371s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (55.00s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.22s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.216245s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.22s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (197.19s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 stop --alsologtostderr -v 5
E1027 19:25:18.677145   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 stop --alsologtostderr -v 5: (38.6445417s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 start --wait true --alsologtostderr -v 5
E1027 19:27:11.548062   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:27:39.257314   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 start --wait true --alsologtostderr -v 5: (2m38.2163859s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (197.19s)

TestMultiControlPlane/serial/DeleteSecondaryNode (14.82s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 node delete m03 --alsologtostderr -v 5: (12.8875517s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5: (1.5047781s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (14.82s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.604066s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.60s)

TestMultiControlPlane/serial/StopCluster (37.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 stop --alsologtostderr -v 5
E1027 19:28:55.606486   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 stop --alsologtostderr -v 5: (36.8255696s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5: exit status 7 (356.3381ms)

-- stdout --
	ha-831400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-831400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-831400-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 19:29:24.939739   14328 out.go:360] Setting OutFile to fd 1872 ...
	I1027 19:29:24.982594   14328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:29:24.982594   14328 out.go:374] Setting ErrFile to fd 1860...
	I1027 19:29:24.982594   14328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:29:24.992328   14328 out.go:368] Setting JSON to false
	I1027 19:29:24.992328   14328 mustload.go:65] Loading cluster: ha-831400
	I1027 19:29:24.992328   14328 notify.go:220] Checking for updates...
	I1027 19:29:24.992918   14328 config.go:182] Loaded profile config "ha-831400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 19:29:24.992918   14328 status.go:174] checking status of ha-831400 ...
	I1027 19:29:25.006425   14328 cli_runner.go:164] Run: docker container inspect ha-831400 --format={{.State.Status}}
	I1027 19:29:25.061980   14328 status.go:371] ha-831400 host status = "Stopped" (err=<nil>)
	I1027 19:29:25.062021   14328 status.go:384] host is not running, skipping remaining checks
	I1027 19:29:25.062021   14328 status.go:176] ha-831400 status: &{Name:ha-831400 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:29:25.062021   14328 status.go:174] checking status of ha-831400-m02 ...
	I1027 19:29:25.073685   14328 cli_runner.go:164] Run: docker container inspect ha-831400-m02 --format={{.State.Status}}
	I1027 19:29:25.126335   14328 status.go:371] ha-831400-m02 host status = "Stopped" (err=<nil>)
	I1027 19:29:25.126335   14328 status.go:384] host is not running, skipping remaining checks
	I1027 19:29:25.126335   14328 status.go:176] ha-831400-m02 status: &{Name:ha-831400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:29:25.126335   14328 status.go:174] checking status of ha-831400-m04 ...
	I1027 19:29:25.138382   14328 cli_runner.go:164] Run: docker container inspect ha-831400-m04 --format={{.State.Status}}
	I1027 19:29:25.191286   14328 status.go:371] ha-831400-m04 host status = "Stopped" (err=<nil>)
	I1027 19:29:25.191286   14328 status.go:384] host is not running, skipping remaining checks
	I1027 19:29:25.191286   14328 status.go:176] ha-831400-m04 status: &{Name:ha-831400-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.18s)

TestMultiControlPlane/serial/RestartCluster (122.28s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 start --wait true --alsologtostderr -v 5 --driver=docker
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 start --wait true --alsologtostderr -v 5 --driver=docker: (2m0.3048634s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5: (1.5568468s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (122.28s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5750206s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.58s)

TestMultiControlPlane/serial/AddSecondaryNode (88.72s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 node add --control-plane --alsologtostderr -v 5
E1027 19:32:11.551601   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 node add --control-plane --alsologtostderr -v 5: (1m26.659984s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-831400 status --alsologtostderr -v 5: (2.0583005s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (88.72s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0875524s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.09s)

TestImageBuild/serial/Setup (54.64s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-583600 --driver=docker
E1027 19:33:55.608959   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-583600 --driver=docker: (54.6356268s)
--- PASS: TestImageBuild/serial/Setup (54.64s)

TestImageBuild/serial/NormalBuild (3.6s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-583600
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-583600: (3.5955793s)
--- PASS: TestImageBuild/serial/NormalBuild (3.60s)

TestImageBuild/serial/BuildWithBuildArg (2.41s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-583600
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-583600: (2.4092763s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.41s)

TestImageBuild/serial/BuildWithDockerIgnore (1.21s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-583600
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-583600: (1.2124058s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.21s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.41s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-583600
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-583600: (1.4082755s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.41s)

TestJSONOutput/start/Command (90.27s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-953900 --output=json --user=testUser --memory=3072 --wait=true --driver=docker
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-953900 --output=json --user=testUser --memory=3072 --wait=true --driver=docker: (1m30.2665184s)
--- PASS: TestJSONOutput/start/Command (90.27s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.17s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-953900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-953900 --output=json --user=testUser: (1.1742347s)
--- PASS: TestJSONOutput/pause/Command (1.17s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.92s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-953900 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.92s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-953900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-953900 --output=json --user=testUser: (12.1585548s)
--- PASS: TestJSONOutput/stop/Command (12.16s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.7s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-413900 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-413900 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (215.0888ms)

-- stdout --
	{"specversion":"1.0","id":"4b9f9412-89fb-4ff9-a7ae-9196cc67536f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-413900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d12ebd2f-9579-47c2-a511-51d2e966bedc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"cdc344d8-87d3-4609-9cd9-76b5765629ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d9b40760-a4ea-448f-b108-f9e686a7292c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"67fe0271-27db-4406-b7e8-84fdcb6bee03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21801"}}
	{"specversion":"1.0","id":"1ffed03c-1516-4df8-877b-cf3fc395254c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"09923435-c499-4eaa-bfdb-891a87e562e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-413900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-413900
--- PASS: TestErrorJSONOutput (0.70s)

TestKicCustomNetwork/create_custom_network (58.43s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-460900 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-460900 --network=: (54.8001359s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-460900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-460900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-460900: (3.5664936s)
--- PASS: TestKicCustomNetwork/create_custom_network (58.43s)

TestKicCustomNetwork/use_default_bridge_network (57.03s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-819400 --network=bridge
E1027 19:37:11.554046   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-819400 --network=bridge: (53.7978161s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-819400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-819400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-819400: (3.1766215s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (57.03s)

TestKicExistingNetwork (57.89s)

=== RUN   TestKicExistingNetwork
I1027 19:38:05.202894   10564 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1027 19:38:05.257877   10564 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1027 19:38:05.267021   10564 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1027 19:38:05.267056   10564 cli_runner.go:164] Run: docker network inspect existing-network
W1027 19:38:05.322193   10564 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1027 19:38:05.322193   10564 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1027 19:38:05.322193   10564 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1027 19:38:05.329089   10564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1027 19:38:05.406064   10564 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0008017d0}
I1027 19:38:05.406064   10564 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1027 19:38:05.413007   10564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1027 19:38:05.470304   10564 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1027 19:38:05.470304   10564 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1027 19:38:05.470304   10564 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1027 19:38:05.495946   10564 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1027 19:38:05.510330   10564 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e68600}
I1027 19:38:05.510330   10564 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1027 19:38:05.518904   10564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1027 19:38:05.663067   10564 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-067000 --network=existing-network
E1027 19:38:34.626501   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:38:55.612247   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-067000 --network=existing-network: (54.0927583s)
helpers_test.go:175: Cleaning up "existing-network-067000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-067000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-067000: (3.2011719s)
I1027 19:39:03.028517   10564 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (57.89s)

TestKicCustomSubnet (59.11s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-117600 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-117600 --subnet=192.168.60.0/24: (55.5202889s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-117600 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-117600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-117600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-117600: (3.5241149s)
--- PASS: TestKicCustomSubnet (59.11s)

TestKicStaticIP (56.43s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-153100 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-153100 --static-ip=192.168.200.200: (52.5296179s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-153100 ip
helpers_test.go:175: Cleaning up "static-ip-153100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-153100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-153100: (3.5814819s)
--- PASS: TestKicStaticIP (56.43s)

TestMainNoArgs (0.16s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.16s)

TestMinikubeProfile (110.98s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-682500 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-682500 --driver=docker: (50.424697s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-682500 --driver=docker
E1027 19:41:58.688186   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:42:11.556932   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-682500 --driver=docker: (49.999853s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-682500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.2607431s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-682500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.2509606s)
helpers_test.go:175: Cleaning up "second-682500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-682500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-682500: (3.7406031s)
helpers_test.go:175: Cleaning up "first-682500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-682500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-682500: (3.8301695s)
--- PASS: TestMinikubeProfile (110.98s)

TestMountStart/serial/StartWithMountFirst (14.33s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-037100 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial2073327698\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-037100 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial2073327698\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (13.3316378s)
--- PASS: TestMountStart/serial/StartWithMountFirst (14.33s)

TestMountStart/serial/VerifyMountFirst (0.59s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-037100 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.59s)

TestMountStart/serial/StartWithMountSecond (14.03s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-037100 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial2073327698\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-037100 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial2073327698\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (13.0342885s)
--- PASS: TestMountStart/serial/StartWithMountSecond (14.03s)

TestMountStart/serial/VerifyMountSecond (0.58s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-037100 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.58s)

TestMountStart/serial/DeleteFirst (2.51s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-037100 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-037100 --alsologtostderr -v=5: (2.5077514s)
--- PASS: TestMountStart/serial/DeleteFirst (2.51s)

TestMountStart/serial/VerifyMountPostDelete (0.58s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-037100 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.58s)

TestMountStart/serial/Stop (1.88s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-037100
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-037100: (1.8745792s)
--- PASS: TestMountStart/serial/Stop (1.88s)

TestMountStart/serial/RestartStopped (10.99s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-037100
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-037100: (9.9893766s)
--- PASS: TestMountStart/serial/RestartStopped (10.99s)

TestMountStart/serial/VerifyMountPostStop (0.58s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-037100 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.58s)

TestMultiNode/serial/FreshStart2Nodes (137.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-817100 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker
E1027 19:43:55.614874   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-817100 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker: (2m16.1201554s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr: (1.0381968s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.16s)

TestMultiNode/serial/DeployApp2Nodes (7.02s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- rollout status deployment/busybox: (3.3247954s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- exec busybox-7b57f96db7-fkcj4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- exec busybox-7b57f96db7-xl2vh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- exec busybox-7b57f96db7-fkcj4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- exec busybox-7b57f96db7-xl2vh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- exec busybox-7b57f96db7-fkcj4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- exec busybox-7b57f96db7-xl2vh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.02s)

TestMultiNode/serial/PingHostFrom2Pods (1.74s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- exec busybox-7b57f96db7-fkcj4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- exec busybox-7b57f96db7-fkcj4 -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- exec busybox-7b57f96db7-xl2vh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817100 -- exec busybox-7b57f96db7-xl2vh -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.74s)

TestMultiNode/serial/AddNode (56.99s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-817100 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-817100 -v=5 --alsologtostderr: (55.6052989s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr: (1.3844265s)
--- PASS: TestMultiNode/serial/AddNode (56.99s)

TestMultiNode/serial/MultiNodeLabels (0.13s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-817100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.13s)

TestMultiNode/serial/ProfileList (1.47s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4683078s)
--- PASS: TestMultiNode/serial/ProfileList (1.47s)

TestMultiNode/serial/CopyFile (20.12s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817100 status --output json --alsologtostderr: (1.3901078s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp testdata\cp-test.txt multinode-817100:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp multinode-817100:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1248354530\001\cp-test_multinode-817100.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp multinode-817100:/home/docker/cp-test.txt multinode-817100-m02:/home/docker/cp-test_multinode-817100_multinode-817100-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m02 "sudo cat /home/docker/cp-test_multinode-817100_multinode-817100-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp multinode-817100:/home/docker/cp-test.txt multinode-817100-m03:/home/docker/cp-test_multinode-817100_multinode-817100-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m03 "sudo cat /home/docker/cp-test_multinode-817100_multinode-817100-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp testdata\cp-test.txt multinode-817100-m02:/home/docker/cp-test.txt
E1027 19:47:11.560238   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp multinode-817100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1248354530\001\cp-test_multinode-817100-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp multinode-817100-m02:/home/docker/cp-test.txt multinode-817100:/home/docker/cp-test_multinode-817100-m02_multinode-817100.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100 "sudo cat /home/docker/cp-test_multinode-817100-m02_multinode-817100.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp multinode-817100-m02:/home/docker/cp-test.txt multinode-817100-m03:/home/docker/cp-test_multinode-817100-m02_multinode-817100-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m03 "sudo cat /home/docker/cp-test_multinode-817100-m02_multinode-817100-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp testdata\cp-test.txt multinode-817100-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp multinode-817100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1248354530\001\cp-test_multinode-817100-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp multinode-817100-m03:/home/docker/cp-test.txt multinode-817100:/home/docker/cp-test_multinode-817100-m03_multinode-817100.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100 "sudo cat /home/docker/cp-test_multinode-817100-m03_multinode-817100.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 cp multinode-817100-m03:/home/docker/cp-test.txt multinode-817100-m02:/home/docker/cp-test_multinode-817100-m03_multinode-817100-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 ssh -n multinode-817100-m02 "sudo cat /home/docker/cp-test_multinode-817100-m03_multinode-817100-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (20.12s)

TestMultiNode/serial/StopNode (3.98s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817100 node stop m03: (1.7867171s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-817100 status: exit status 7 (1.121684s)

-- stdout --
	multinode-817100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-817100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-817100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr: exit status 7 (1.0733669s)

-- stdout --
	multinode-817100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-817100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-817100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 19:47:26.514004    6560 out.go:360] Setting OutFile to fd 1836 ...
	I1027 19:47:26.557616    6560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:47:26.557616    6560 out.go:374] Setting ErrFile to fd 1420...
	I1027 19:47:26.557616    6560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:47:26.569158    6560 out.go:368] Setting JSON to false
	I1027 19:47:26.569158    6560 mustload.go:65] Loading cluster: multinode-817100
	I1027 19:47:26.569158    6560 notify.go:220] Checking for updates...
	I1027 19:47:26.569519    6560 config.go:182] Loaded profile config "multinode-817100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 19:47:26.569519    6560 status.go:174] checking status of multinode-817100 ...
	I1027 19:47:26.582115    6560 cli_runner.go:164] Run: docker container inspect multinode-817100 --format={{.State.Status}}
	I1027 19:47:26.637800    6560 status.go:371] multinode-817100 host status = "Running" (err=<nil>)
	I1027 19:47:26.637855    6560 host.go:66] Checking if "multinode-817100" exists ...
	I1027 19:47:26.644020    6560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-817100
	I1027 19:47:26.698971    6560 host.go:66] Checking if "multinode-817100" exists ...
	I1027 19:47:26.707100    6560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:47:26.711941    6560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-817100
	I1027 19:47:26.769080    6560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-817100\id_rsa Username:docker}
	I1027 19:47:26.901810    6560 ssh_runner.go:195] Run: systemctl --version
	I1027 19:47:26.920305    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:47:26.946825    6560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-817100
	I1027 19:47:27.000471    6560 kubeconfig.go:125] found "multinode-817100" server: "https://127.0.0.1:56906"
	I1027 19:47:27.000471    6560 api_server.go:166] Checking apiserver status ...
	I1027 19:47:27.007465    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:47:27.035838    6560 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2314/cgroup
	I1027 19:47:27.048844    6560 api_server.go:182] apiserver freezer: "7:freezer:/docker/bbc2864405f35f9e6d8f32c2cae8547689b461bc6d606d892d12f9769c279ffd/kubepods/burstable/pod7b3ab70213665009e16df4d68f135f0f/9e60f3eb73702614d91a714f7ada5e1d314e58feb138cc502ce180b9221bbb2c"
	I1027 19:47:27.055839    6560 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bbc2864405f35f9e6d8f32c2cae8547689b461bc6d606d892d12f9769c279ffd/kubepods/burstable/pod7b3ab70213665009e16df4d68f135f0f/9e60f3eb73702614d91a714f7ada5e1d314e58feb138cc502ce180b9221bbb2c/freezer.state
	I1027 19:47:27.066837    6560 api_server.go:204] freezer state: "THAWED"
	I1027 19:47:27.066837    6560 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56906/healthz ...
	I1027 19:47:27.078376    6560 api_server.go:279] https://127.0.0.1:56906/healthz returned 200:
	ok
	I1027 19:47:27.078376    6560 status.go:463] multinode-817100 apiserver status = Running (err=<nil>)
	I1027 19:47:27.078376    6560 status.go:176] multinode-817100 status: &{Name:multinode-817100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:47:27.078376    6560 status.go:174] checking status of multinode-817100-m02 ...
	I1027 19:47:27.090898    6560 cli_runner.go:164] Run: docker container inspect multinode-817100-m02 --format={{.State.Status}}
	I1027 19:47:27.144958    6560 status.go:371] multinode-817100-m02 host status = "Running" (err=<nil>)
	I1027 19:47:27.144958    6560 host.go:66] Checking if "multinode-817100-m02" exists ...
	I1027 19:47:27.150962    6560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-817100-m02
	I1027 19:47:27.201955    6560 host.go:66] Checking if "multinode-817100-m02" exists ...
	I1027 19:47:27.208956    6560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:47:27.214956    6560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-817100-m02
	I1027 19:47:27.263956    6560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56954 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-817100-m02\id_rsa Username:docker}
	I1027 19:47:27.399244    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:47:27.419114    6560 status.go:176] multinode-817100-m02 status: &{Name:multinode-817100-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:47:27.419205    6560 status.go:174] checking status of multinode-817100-m03 ...
	I1027 19:47:27.430988    6560 cli_runner.go:164] Run: docker container inspect multinode-817100-m03 --format={{.State.Status}}
	I1027 19:47:27.484747    6560 status.go:371] multinode-817100-m03 host status = "Stopped" (err=<nil>)
	I1027 19:47:27.484747    6560 status.go:384] host is not running, skipping remaining checks
	I1027 19:47:27.484747    6560 status.go:176] multinode-817100-m03 status: &{Name:multinode-817100-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.98s)

TestMultiNode/serial/StartAfterStop (13.53s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817100 node start m03 -v=5 --alsologtostderr: (11.9955597s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817100 status -v=5 --alsologtostderr: (1.4036374s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.53s)

TestMultiNode/serial/RestartKeepsNodes (89.73s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-817100
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-817100
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-817100: (24.8684962s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-817100 --wait=true -v=5 --alsologtostderr
E1027 19:48:55.618465   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-817100 --wait=true -v=5 --alsologtostderr: (1m4.5564336s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-817100
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.73s)

TestMultiNode/serial/DeleteNode (8.62s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817100 node delete m03: (7.0615955s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr: (1.1143695s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.62s)

TestMultiNode/serial/StopMultiNode (23.94s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817100 stop: (23.3612877s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-817100 status: exit status 7 (295.1518ms)

-- stdout --
	multinode-817100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-817100-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr: exit status 7 (285.9724ms)

-- stdout --
	multinode-817100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-817100-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 19:49:43.117039    6944 out.go:360] Setting OutFile to fd 1880 ...
	I1027 19:49:43.160499    6944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:49:43.160499    6944 out.go:374] Setting ErrFile to fd 1904...
	I1027 19:49:43.160499    6944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:49:43.169838    6944 out.go:368] Setting JSON to false
	I1027 19:49:43.169838    6944 mustload.go:65] Loading cluster: multinode-817100
	I1027 19:49:43.170838    6944 notify.go:220] Checking for updates...
	I1027 19:49:43.171007    6944 config.go:182] Loaded profile config "multinode-817100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1027 19:49:43.171007    6944 status.go:174] checking status of multinode-817100 ...
	I1027 19:49:43.183540    6944 cli_runner.go:164] Run: docker container inspect multinode-817100 --format={{.State.Status}}
	I1027 19:49:43.236166    6944 status.go:371] multinode-817100 host status = "Stopped" (err=<nil>)
	I1027 19:49:43.236166    6944 status.go:384] host is not running, skipping remaining checks
	I1027 19:49:43.236166    6944 status.go:176] multinode-817100 status: &{Name:multinode-817100 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:49:43.236166    6944 status.go:174] checking status of multinode-817100-m02 ...
	I1027 19:49:43.248996    6944 cli_runner.go:164] Run: docker container inspect multinode-817100-m02 --format={{.State.Status}}
	I1027 19:49:43.302685    6944 status.go:371] multinode-817100-m02 host status = "Stopped" (err=<nil>)
	I1027 19:49:43.302685    6944 status.go:384] host is not running, skipping remaining checks
	I1027 19:49:43.302685    6944 status.go:176] multinode-817100-m02 status: &{Name:multinode-817100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.94s)

TestMultiNode/serial/RestartMultiNode (61.58s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-817100 --wait=true -v=5 --alsologtostderr --driver=docker
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-817100 --wait=true -v=5 --alsologtostderr --driver=docker: (1m0.181661s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817100 status --alsologtostderr: (1.0195912s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (61.58s)

TestMultiNode/serial/ValidateNameConflict (56.07s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-817100
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-817100-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-817100-m02 --driver=docker: exit status 14 (210.3212ms)

-- stdout --
	* [multinode-817100-m02] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-817100-m02' is duplicated with machine name 'multinode-817100-m02' in profile 'multinode-817100'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-817100-m03 --driver=docker
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-817100-m03 --driver=docker: (51.2857716s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-817100
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-817100: exit status 80 (679.643ms)

-- stdout --
	* Adding node m03 to cluster multinode-817100 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-817100-m03 already exists in multinode-817100-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_25.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-817100-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-817100-m03: (3.7413938s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (56.07s)

TestPreload (167.81s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-234600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.32.0
E1027 19:52:11.563723   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-234600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.32.0: (1m38.6072837s)
preload_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-234600 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-234600 image pull gcr.io/k8s-minikube/busybox: (2.1434738s)
preload_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-234600
preload_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-234600: (6.8100414s)
preload_test.go:65: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-234600 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker
E1027 19:53:55.621785   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-234600 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker: (56.1429215s)
preload_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-234600 image list
helpers_test.go:175: Cleaning up "test-preload-234600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-234600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-234600: (3.6202704s)
--- PASS: TestPreload (167.81s)

TestScheduledStopWindows (115.29s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-954500 --memory=3072 --driver=docker
E1027 19:55:14.640092   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-954500 --memory=3072 --driver=docker: (48.7822274s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-954500 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-954500 --schedule 5m: (1.0636534s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-954500 -n scheduled-stop-954500
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-954500 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-954500 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-954500 --schedule 5s: (1.0995994s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-954500
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-954500: exit status 7 (232.7307ms)

-- stdout --
	scheduled-stop-954500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-954500 -n scheduled-stop-954500
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-954500 -n scheduled-stop-954500: exit status 7 (222.6192ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-954500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-954500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-954500: (2.5143834s)
--- PASS: TestScheduledStopWindows (115.29s)

TestInsufficientStorage (32.17s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-810100 --memory=3072 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-810100 --memory=3072 --output=json --wait=true --driver=docker: exit status 26 (28.2519014s)

-- stdout --
	{"specversion":"1.0","id":"2f463477-cb5e-49de-b26b-dcea112b1117","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-810100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c7e1fcf-3330-4cb5-a7c0-74b54f7f1236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"5312ad8d-3e7f-40a6-a74f-04c5fd134d31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9553a501-2bcc-4265-a32e-8f8443ac9c3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"0090fa44-c060-478f-9d8a-a8f1c5bd39fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21801"}}
	{"specversion":"1.0","id":"e670bfe4-6979-4b54-924c-de12e6fc646e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cebd358a-a755-4349-8533-25398c64d650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ca309262-cdba-4130-b195-029377b99a9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"35d17896-d5de-4776-ab5d-571d08612375","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d986475f-0367-4aaf-ae54-e6689fdef37d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"738b2e5b-8629-4165-9ed9-060b6c8b0ee8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-810100\" primary control-plane node in \"insufficient-storage-810100\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"804b89b2-1357-4e8b-bac6-61009254e6dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9b82b0b-3eb6-4d6c-82a7-c474d408d273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5db4d837-a67b-4bc2-8e44-f0be9f3876cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-810100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-810100 --output=json --layout=cluster: exit status 7 (619.537ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-810100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-810100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1027 19:56:59.385954   10468 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-810100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-810100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-810100 --output=json --layout=cluster: exit status 7 (608.2638ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-810100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-810100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1027 19:57:00.001095    3120 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-810100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1027 19:57:00.025080    3120 status.go:258] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-810100\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
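For readers consuming this `--layout=cluster` output programmatically, a minimal sketch of parsing it, using the exact JSON captured in the run above (field names are as emitted by minikube v1.37.0 in this report; nothing beyond them is assumed):

```python
import json

# Cluster-layout status JSON captured from the insufficient-storage run above.
raw = '''{"Name":"insufficient-storage-810100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-810100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(raw)
print(status["StatusName"], "-", status["StatusDetail"])  # cluster-level status
for node in status["Nodes"]:
    # Each node carries its own HTTP-style StatusCode plus per-component states.
    for name, comp in node["Components"].items():
        print(node["Name"], name, comp["StatusName"])
```

Note that the command itself exits non-zero (exit status 7 here) while still printing valid JSON on stdout, so a caller must read stdout regardless of the exit code.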
helpers_test.go:175: Cleaning up "insufficient-storage-810100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-810100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-810100: (2.6841378s)
--- PASS: TestInsufficientStorage (32.17s)

                                                
                                    
TestRunningBinaryUpgrade (115.4s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.114799078.exe start -p running-upgrade-975300 --memory=3072 --vm-driver=docker
E1027 19:58:55.625937   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.114799078.exe start -p running-upgrade-975300 --memory=3072 --vm-driver=docker: (56.6696055s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-975300 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-975300 --memory=3072 --alsologtostderr -v=1 --driver=docker: (54.4814585s)
helpers_test.go:175: Cleaning up "running-upgrade-975300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-975300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-975300: (3.3764589s)
--- PASS: TestRunningBinaryUpgrade (115.40s)

                                                
                                    
TestKubernetesUpgrade (439.9s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker: (1m1.66796s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-419600
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-419600: (3.1062981s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-419600 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-419600 status --format={{.Host}}: exit status 7 (221.64ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker: (4m57.416946s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-419600 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker: exit status 106 (236.5893ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-419600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-419600
	    minikube start -p kubernetes-upgrade-419600 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4196002 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-419600 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-419600 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker: (52.9138523s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-419600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-419600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-419600: (24.1515278s)
--- PASS: TestKubernetesUpgrade (439.90s)

                                                
                                    
TestMissingContainerUpgrade (137.28s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.1808363481.exe start -p missing-upgrade-675300 --memory=3072 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.1808363481.exe start -p missing-upgrade-675300 --memory=3072 --driver=docker: (57.9773166s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-675300
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-675300: (5.8989251s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-675300
version_upgrade_test.go:323: (dbg) Done: docker rm missing-upgrade-675300: (1.9593832s)
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-675300 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-675300 --memory=3072 --alsologtostderr -v=1 --driver=docker: (1m7.092352s)
helpers_test.go:175: Cleaning up "missing-upgrade-675300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-675300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-675300: (3.6782996s)
--- PASS: TestMissingContainerUpgrade (137.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-066700 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-066700 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker: exit status 14 (298.9865ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-066700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6456 Build 19045.6456
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (101.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-066700 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-066700 --memory=3072 --alsologtostderr -v=5 --driver=docker: (1m41.1915833s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-066700 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (101.90s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (172.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.2808771772.exe start -p stopped-upgrade-066700 --memory=3072 --vm-driver=docker
E1027 19:57:11.567626   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:58:38.701754   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.2808771772.exe start -p stopped-upgrade-066700 --memory=3072 --vm-driver=docker: (2m9.9363733s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.2808771772.exe -p stopped-upgrade-066700 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.32.0.2808771772.exe -p stopped-upgrade-066700 stop: (15.3200527s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-066700 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-066700 --memory=3072 --alsologtostderr -v=1 --driver=docker: (27.5694278s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (172.83s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (26.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-066700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-066700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (23.0042537s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-066700 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-066700 status -o json: exit status 2 (676.5475ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-066700","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
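The plain `status -o json` form above (no `--layout` flag) is a flat per-profile object rather than the nested cluster layout. A minimal sketch of reading it, using only the fields visible in this run (the exit-code mapping is minikube's own; here the command exited 2 while kubelet and apiserver were stopped):

```python
import json

# Flat per-profile status JSON captured from the NoKubernetes run above.
raw = ('{"Name":"NoKubernetes-066700","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

profile = json.loads(raw)
# A --no-kubernetes profile keeps the container host running while the
# Kubernetes components stay stopped, which is exactly this shape.
k8s_running = profile["Kubelet"] == "Running" and profile["APIServer"] == "Running"
print(profile["Name"], "host:", profile["Host"], "k8s running:", k8s_running)
```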
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-066700
no_kubernetes_test.go:126: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-066700: (3.020673s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.70s)

                                                
                                    
TestNoKubernetes/serial/Start (19.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-066700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-066700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (19.3805601s)
--- PASS: TestNoKubernetes/serial/Start (19.38s)

                                                
                                    
TestPause/serial/Start (82.49s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-031800 --memory=3072 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-031800 --memory=3072 --install-addons=false --wait=all --driver=docker: (1m22.4875742s)
--- PASS: TestPause/serial/Start (82.49s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-066700 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-066700 "sudo systemctl is-active --quiet service kubelet": exit status 1 (546.6279ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.55s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.68202s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (1.7178146s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.40s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-066700
no_kubernetes_test.go:160: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-066700: (2.0583382s)
--- PASS: TestNoKubernetes/serial/Stop (2.06s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (10.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-066700 --driver=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-066700 --driver=docker: (10.7263415s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (10.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-066700 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-066700 "sudo systemctl is-active --quiet service kubelet": exit status 1 (587.5481ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.59s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-066700
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-066700: (2.9316174s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.93s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (89.51s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-031800 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-031800 --alsologtostderr -v=1 --driver=docker: (1m29.4897016s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (89.51s)

                                                
                                    
TestPause/serial/Pause (1.36s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-031800 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-031800 --alsologtostderr -v=5: (1.3580682s)
--- PASS: TestPause/serial/Pause (1.36s)

                                                
                                    
TestPause/serial/VerifyStatus (0.69s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-031800 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-031800 --output=json --layout=cluster: exit status 2 (687.7236ms)

                                                
                                                
-- stdout --
	{"Name":"pause-031800","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-031800","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.69s)

                                                
                                    
TestPause/serial/Unpause (1.52s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-031800 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-031800 --alsologtostderr -v=5: (1.5219477s)
--- PASS: TestPause/serial/Unpause (1.52s)

                                                
                                    
TestPause/serial/PauseAgain (1.52s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-031800 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-031800 --alsologtostderr -v=5: (1.5168229s)
--- PASS: TestPause/serial/PauseAgain (1.52s)

                                                
                                    
TestPause/serial/DeletePaused (12.77s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-031800 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-031800 --alsologtostderr -v=5: (12.7664762s)
--- PASS: TestPause/serial/DeletePaused (12.77s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (1.92s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.72435s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-031800
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-031800: exit status 1 (56.0082ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-031800: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (76.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-853800 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-853800 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (1m16.9555905s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (76.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (116.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-464700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-464700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.34.1: (1m56.2458909s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-853800 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [306869a1-0855-457b-9ac6-5c36af155d36] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [306869a1-0855-457b-9ac6-5c36af155d36] Running
E1027 20:03:55.629823   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0067973s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-853800 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-853800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-853800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.4181982s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-853800 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-853800 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-853800 --alsologtostderr -v=3: (14.0511119s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-853800 -n old-k8s-version-853800
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-853800 -n old-k8s-version-853800: exit status 7 (225.5673ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-853800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (33.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-853800 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-853800 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (32.6641833s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-853800 -n old-k8s-version-853800
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (33.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (23.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2lrff" [01bc16e6-7834-45ee-9ad1-b9e60ec70008] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2lrff" [01bc16e6-7834-45ee-9ad1-b9e60ec70008] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 23.0073563s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (23.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2lrff" [01bc16e6-7834-45ee-9ad1-b9e60ec70008] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0157509s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-853800 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-853800 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-853800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-853800 --alsologtostderr -v=1: (1.266178s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-853800 -n old-k8s-version-853800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-853800 -n old-k8s-version-853800: exit status 2 (723.7013ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-853800 -n old-k8s-version-853800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-853800 -n old-k8s-version-853800: exit status 2 (718.3239ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-853800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-853800 --alsologtostderr -v=1: (1.1358969s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-853800 -n old-k8s-version-853800
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-853800 -n old-k8s-version-853800: (1.0123385s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-853800 -n old-k8s-version-853800
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.73s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-464700 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [78d340fe-e8c2-4b37-9af9-9b1f44982d65] Pending
helpers_test.go:352: "busybox" [78d340fe-e8c2-4b37-9af9-9b1f44982d65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [78d340fe-e8c2-4b37-9af9-9b1f44982d65] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.0062272s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-464700 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (96.59s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-036500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-036500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.1: (1m36.5861103s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (96.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-892000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-892000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.1: (1m31.5107144s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.94s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-464700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-464700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.6290794s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-464700 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (20.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-464700 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-464700 --alsologtostderr -v=3: (20.0265831s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.62s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-464700 -n no-preload-464700
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-464700 -n no-preload-464700: exit status 7 (251.0033ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-464700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (60.29s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-464700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-464700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.34.1: (59.6077652s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-464700 -n no-preload-464700
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dw9ts" [75f9801c-6e53-47b9-bd34-caed7406789f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0051069s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.24s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dw9ts" [75f9801c-6e53-47b9-bd34-caed7406789f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0049957s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-464700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-892000 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [95d9b434-f3c6-4fa9-a680-e749b3ede4da] Pending
helpers_test.go:352: "busybox" [95d9b434-f3c6-4fa9-a680-e749b3ede4da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [95d9b434-f3c6-4fa9-a680-e749b3ede4da] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.006966s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-892000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.67s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-036500 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8f7ccc88-8d12-4d4f-bdf0-681f3cbbf48a] Pending
helpers_test.go:352: "busybox" [8f7ccc88-8d12-4d4f-bdf0-681f3cbbf48a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8f7ccc88-8d12-4d4f-bdf0-681f3cbbf48a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0100672s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-036500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-464700 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-464700 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-464700 --alsologtostderr -v=1: (1.2699166s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-464700 -n no-preload-464700
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-464700 -n no-preload-464700: exit status 2 (684.7533ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-464700 -n no-preload-464700
E1027 20:07:11.575387   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-536500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-464700 -n no-preload-464700: exit status 2 (655.2074ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-464700 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-464700 --alsologtostderr -v=1: (1.0230575s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-464700 -n no-preload-464700
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-464700 -n no-preload-464700
--- PASS: TestStartStop/group/no-preload/serial/Pause (5.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-892000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-892000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3531991s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-892000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-892000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-892000 --alsologtostderr -v=3: (18.4248584s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-036500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-036500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.0420745s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-036500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (16.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-036500 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-036500 --alsologtostderr -v=3: (16.2095492s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (62.7s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-791900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-791900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.34.1: (1m2.6978636s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 7 (227.6831ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-892000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.65s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-036500 -n embed-certs-036500
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-036500 -n embed-certs-036500: exit status 7 (239.0105ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-036500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.65s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-892000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-892000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.1: (57.5057898s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (59.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-036500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-036500 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.1: (59.2411625s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-036500 -n embed-certs-036500
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-791900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-791900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.7999079s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-791900 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-791900 --alsologtostderr -v=3: (12.1913498s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pqh98" [6a72a5b1-1fee-458f-92b8-fb935847f9b9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0112444s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-scwd2" [c6bc25c1-38be-4f36-ac5d-36fc8745756f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0082047s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-791900 -n newest-cni-791900
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-791900 -n newest-cni-791900: exit status 7 (214.9684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-791900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (27.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-791900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-791900 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.34.1: (26.6854065s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-791900 -n newest-cni-791900
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pqh98" [6a72a5b1-1fee-458f-92b8-fb935847f9b9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007005s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-892000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-scwd2" [c6bc25c1-38be-4f36-ac5d-36fc8745756f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0083451s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-036500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-892000 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-892000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-892000 --alsologtostderr -v=1: (1.1858171s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 2 (729.242ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000: exit status 2 (738.8676ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-892000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-892000 --alsologtostderr -v=1: (1.0933163s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-892000 -n default-k8s-diff-port-892000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-036500 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-036500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-036500 --alsologtostderr -v=1: (1.45683s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-036500 -n embed-certs-036500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-036500 -n embed-certs-036500: exit status 2 (747.4385ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-036500 -n embed-certs-036500
E1027 20:08:49.784578   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:49.792607   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:49.804884   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:49.827589   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:49.869088   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:49.951275   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:50.113690   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-036500 -n embed-certs-036500: exit status 2 (724.4933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-036500 --alsologtostderr -v=1
E1027 20:08:50.435310   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:51.077863   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-036500 --alsologtostderr -v=1: (1.2570714s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-036500 -n embed-certs-036500
E1027 20:08:52.360897   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-036500 -n embed-certs-036500
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.97s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (98.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m38.4030643s)
--- PASS: TestNetworkPlugins/group/auto/Start (98.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (114.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
E1027 20:09:00.046901   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m54.4112855s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (114.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-791900 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.49s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (118.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
E1027 20:09:30.770812   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (1m58.2296503s)
--- PASS: TestNetworkPlugins/group/calico/Start (118.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (95.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
E1027 20:10:11.734053   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:22.276710   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:22.285708   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:22.298691   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:22.321696   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:22.364710   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:22.447703   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:22.610716   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:22.933829   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:23.576043   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:24.858674   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:27.421072   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:32.543150   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m35.3864738s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.39s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-938400 "pgrep -a kubelet"
I1027 20:10:36.580414   10564 config.go:182] Loaded profile config "auto-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.64s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (17.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-938400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-plvsx" [732d12fa-591c-4b84-8b7d-ee62a868a748] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 20:10:42.785733   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-plvsx" [732d12fa-591c-4b84-8b7d-ee62a868a748] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 17.0067583s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (17.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-97sf2" [13d616f7-ce57-43ae-b434-d4d29483b3bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0059669s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-938400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-938400 "pgrep -a kubelet"
I1027 20:11:00.192241   10564 config.go:182] Loaded profile config "kindnet-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.72s)

TestNetworkPlugins/group/kindnet/NetCatPod (20.10s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-938400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-938400 replace --force -f testdata\netcat-deployment.yaml: (1.0621446s)
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v655x" [7fa5d836-1b01-41f4-9631-0f5c0496a267] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 20:11:03.268498   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-v655x" [7fa5d836-1b01-41f4-9631-0f5c0496a267] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 19.0094261s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (20.10s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-938400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.59s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-938400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.59s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9tc7c" [0f20c983-5f69-409d-90e1-2451646860f2] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1027 20:11:26.639179   10564 config.go:182] Loaded profile config "custom-flannel-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
helpers_test.go:352: "calico-node-9tc7c" [0f20c983-5f69-409d-90e1-2451646860f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0084078s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (16.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-938400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kk68x" [29dc0773-cedb-4bb0-aa6f-dc227047545f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kk68x" [29dc0773-cedb-4bb0-aa6f-dc227047545f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.0065011s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (16.31s)

TestNetworkPlugins/group/calico/KubeletFlags (0.71s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-938400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.71s)

TestNetworkPlugins/group/false/Start (109.03s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
I1027 20:11:33.126835   10564 config.go:182] Loaded profile config "calico-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m49.0291709s)
--- PASS: TestNetworkPlugins/group/false/Start (109.03s)

TestNetworkPlugins/group/calico/NetCatPod (27.62s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-938400 replace --force -f testdata\netcat-deployment.yaml
E1027 20:11:33.658046   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-btfbf" [df24d7dc-a8fc-44e8-b0da-3a3db46c229e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-btfbf" [df24d7dc-a8fc-44e8-b0da-3a3db46c229e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 27.0071275s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (27.62s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-938400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (100.90s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m40.895967s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.90s)

TestNetworkPlugins/group/calico/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-938400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (82.73s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E1027 20:12:24.221985   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-892000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m22.732455s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.73s)

TestNetworkPlugins/group/bridge/Start (80.91s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E1027 20:12:44.704579   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-892000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:13:06.154637   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\no-preload-464700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m20.9113128s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.91s)

TestNetworkPlugins/group/false/KubeletFlags (0.59s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-938400 "pgrep -a kubelet"
I1027 20:13:22.199504   10564 config.go:182] Loaded profile config "false-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.59s)

TestNetworkPlugins/group/false/NetCatPod (14.45s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-938400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p7bwh" [762ab337-728e-4312-aa3a-bf2c25724eb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 20:13:25.667690   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-892000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-p7bwh" [762ab337-728e-4312-aa3a-bf2c25724eb7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.0063522s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.45s)

TestNetworkPlugins/group/false/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-938400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.28s)

TestNetworkPlugins/group/false/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.25s)

TestNetworkPlugins/group/false/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.56s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-938400 "pgrep -a kubelet"
I1027 20:13:40.709111   10564 config.go:182] Loaded profile config "enable-default-cni-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.56s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.51s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-938400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5hzh8" [770b8679-0495-4abb-b2c2-fdc6815ac786] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5hzh8" [770b8679-0495-4abb-b2c2-fdc6815ac786] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.0108268s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.51s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-p66k7" [df22afe6-6acb-42b2-9a97-8c0daffad402] Running
E1027 20:13:49.789417   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.008102s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.61s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-938400 "pgrep -a kubelet"
I1027 20:13:52.252539   10564 config.go:182] Loaded profile config "flannel-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.61s)

TestNetworkPlugins/group/flannel/NetCatPod (14.49s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-938400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mn5cl" [407e3ce4-d4a2-45f6-a905-393dfb7180c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 20:13:55.637538   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-057200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mn5cl" [407e3ce4-d4a2-45f6-a905-393dfb7180c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.0089256s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.49s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-938400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.59s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-938400 "pgrep -a kubelet"
I1027 20:14:04.777390   10564 config.go:182] Loaded profile config "bridge-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.59s)

TestNetworkPlugins/group/bridge/NetCatPod (15.57s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-938400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d4vr8" [d460ea7f-0d25-4c4b-b755-546430d26c44] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d4vr8" [d460ea7f-0d25-4c4b-b755-546430d26c44] Running
E1027 20:14:17.502777   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-853800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.009249s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.57s)

TestNetworkPlugins/group/flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-938400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.28s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.28s)

TestNetworkPlugins/group/flannel/HairPin (0.30s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.30s)

TestNetworkPlugins/group/kubenet/Start (103.64s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-938400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m43.6426725s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (103.64s)

TestNetworkPlugins/group/bridge/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-938400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)

TestNetworkPlugins/group/bridge/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

TestNetworkPlugins/group/bridge/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.24s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.57s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-938400 "pgrep -a kubelet"
I1027 20:15:58.095747   10564 config.go:182] Loaded profile config "kubenet-938400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.57s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.48s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-938400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7lv8z" [71a9b8de-b699-439c-9cdd-ff1a3e633e17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 20:15:58.604718   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-938400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:16:03.726849   10564 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-938400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-7lv8z" [71a9b8de-b699-439c-9cdd-ff1a3e633e17] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.0077425s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.48s)

TestNetworkPlugins/group/kubenet/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-938400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

TestNetworkPlugins/group/kubenet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

TestNetworkPlugins/group/kubenet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-938400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)

Test skip (27/344)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Registry (28.8s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.1864ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-d95hc" [eddff959-983e-4c45-a7ab-e99bf90c4d42] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006657s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-hwlws" [17a66fa7-a52a-4cc2-ae53-0169881e3444] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0398866s
addons_test.go:392: (dbg) Run:  kubectl --context addons-057200 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-057200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-057200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (15.2762916s)
addons_test.go:407: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable registry --alsologtostderr -v=1: (1.2332534s)
--- SKIP: TestAddons/parallel/Registry (28.80s)

TestAddons/parallel/Ingress (27.54s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-057200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-057200 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-057200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7b471054-ce82-4702-95b5-6ea410368458] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7b471054-ce82-4702-95b5-6ea410368458] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.014534s
I1027 19:05:22.976841   10564 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable ingress-dns --alsologtostderr -v=1: (2.6949995s)
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-057200 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-057200 addons disable ingress --alsologtostderr -v=1: (8.7410185s)
--- SKIP: TestAddons/parallel/Ingress (27.54s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-536500 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-536500 --alsologtostderr -v=1] ...
helpers_test.go:519: unable to terminate pid 1116: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (48.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-536500 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-536500 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-n29zd" [83ff2a9a-6d1b-4d7a-96be-453443d0cf76] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-n29zd" [83ff2a9a-6d1b-4d7a-96be-453443d0cf76] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 48.0064407s
functional_test.go:1651: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (48.58s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:34: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.48s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-088000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-088000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.48s)

TestNetworkPlugins/group/cilium (11.37s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-938400 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-938400

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-938400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-938400" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-938400" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: kubelet daemon config:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> k8s: kubelet logs:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-938400
>>> host: docker daemon status:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: docker daemon config:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: docker system info:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: cri-docker daemon status:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: cri-docker daemon config:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: cri-dockerd version:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: containerd daemon status:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: containerd daemon config:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: containerd config dump:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: crio daemon status:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: crio daemon config:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: /etc/crio:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
>>> host: crio config:
* Profile "cilium-938400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-938400"
----------------------- debugLogs end: cilium-938400 [took: 10.8624608s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-938400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-938400
--- SKIP: TestNetworkPlugins/group/cilium (11.37s)
