Test Report: Docker_Windows 17402

                    
0345e29c9722d30e510a1c9d39dac0f90ef33e97 : 2023-10-11 : 31404

Tests failed: 3 of 314

Order  Failed test                                    Duration (s)
52     TestErrorSpam/setup                            74.23
83     TestFunctional/parallel/ConfigCmd              2.17
260    TestPause/serial/SecondStartNoReconfiguration  101.01
TestErrorSpam/setup (74.23s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-149300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-149300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 --driver=docker: (1m14.2267831s)
error_spam_test.go:96: unexpected stderr: "W1011 18:02:07.431673    2876 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube2\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-149300] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=17402
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node nospam-149300 in cluster nospam-149300
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-149300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W1011 18:02:07.431673    2876 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (74.23s)

TestFunctional/parallel/ConfigCmd (2.17s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-420000 config unset cpus" to be -""- but got *"W1011 18:08:00.936244    8068 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube2\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-420000 config get cpus: exit status 14 (355.3579ms)

** stderr ** 
	W1011 18:08:01.401378    9656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-420000 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W1011 18:08:01.401378    9656 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube2\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-420000 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W1011 18:08:01.757597    7824 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube2\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-420000 config get cpus" to be -""- but got *"W1011 18:08:02.107246    2304 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube2\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-420000 config unset cpus" to be -""- but got *"W1011 18:08:02.431547    1820 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube2\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-420000 config get cpus: exit status 14 (336.5179ms)

** stderr ** 
	W1011 18:08:02.775754    6136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-420000 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W1011 18:08:02.775754    6136 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube2\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (2.17s)
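Every assertion in this test fails the same way: the expected stderr is correct but has the known context warning prepended to it. A hypothetical mitigation (the helper name is made up; this is not minikube's actual test code) is to filter out lines carrying that warning before comparing stderr against the expectation:

```go
package main

import (
	"fmt"
	"strings"
)

// stripContextWarning removes lines containing the known, benign
// "Unable to resolve the current Docker CLI context" warning so the
// remaining stderr can be compared against the expected output.
func stripContextWarning(stderr string) string {
	var kept []string
	for _, line := range strings.Split(stderr, "\n") {
		if strings.Contains(line, "Unable to resolve the current Docker CLI context") {
			continue // host-side Docker CLI noise, unrelated to the command under test
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n")
}

func main() {
	got := "W1011 18:08:02.775754 main.go:291] Unable to resolve the current Docker CLI context \"default\"\n" +
		"Error: specified key could not be found in config"
	fmt.Println(stripContextWarning(got))
	// prints: Error: specified key could not be found in config
}
```

With that filter in place, the `config get cpus` cases above would match their expected `Error: specified key could not be found in config`, isolating the real signal (exit status 14 behaves correctly) from the environment noise.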

TestPause/serial/SecondStartNoReconfiguration (101.01s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-375900 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-375900 --alsologtostderr -v=1 --driver=docker: (1m21.6248336s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-375900] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node pause-375900 in cluster pause-375900
	* Pulling base image ...
	* Updating the running docker "pause-375900" container ...
	* Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-375900" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	W1011 19:01:02.916680    5476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1011 19:01:02.983911    5476 out.go:296] Setting OutFile to fd 1760 ...
	I1011 19:01:02.984503    5476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 19:01:02.984503    5476 out.go:309] Setting ErrFile to fd 1748...
	I1011 19:01:02.984503    5476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 19:01:03.009310    5476 out.go:303] Setting JSON to false
	I1011 19:01:03.011917    5476 start.go:128] hostinfo: {"hostname":"minikube2","uptime":4974,"bootTime":1697045888,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3570 Build 19045.3570","kernelVersion":"10.0.19045.3570 Build 19045.3570","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1011 19:01:03.011917    5476 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1011 19:01:03.018918    5476 out.go:177] * [pause-375900] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	I1011 19:01:03.028036    5476 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1011 19:01:03.025075    5476 notify.go:220] Checking for updates...
	I1011 19:01:03.036779    5476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 19:01:03.046070    5476 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1011 19:01:03.052152    5476 out.go:177]   - MINIKUBE_LOCATION=17402
	I1011 19:01:03.060282    5476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 19:01:03.065234    5476 config.go:182] Loaded profile config "pause-375900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:01:03.067262    5476 driver.go:378] Setting default libvirt URI to qemu:///system
	I1011 19:01:03.328560    5476 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.1 (123237)
	I1011 19:01:03.336336    5476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:03.703314    5476 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:80 SystemTime:2023-10-11 19:01:03.6497038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:03.710162    5476 out.go:177] * Using the docker driver based on existing profile
	I1011 19:01:03.716613    5476 start.go:298] selected driver: docker
	I1011 19:01:03.716739    5476 start.go:902] validating driver "docker" against &{Name:pause-375900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-375900 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false re
gistry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 19:01:03.716795    5476 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 19:01:03.729257    5476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:04.091252    5476 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:80 SystemTime:2023-10-11 19:01:04.0404102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:04.149582    5476 cni.go:84] Creating CNI manager for ""
	I1011 19:01:04.149582    5476 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 19:01:04.149582    5476 start_flags.go:323] config:
	{Name:pause-375900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-375900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:fal
se storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 19:01:04.155593    5476 out.go:177] * Starting control plane node pause-375900 in cluster pause-375900
	I1011 19:01:04.161609    5476 cache.go:122] Beginning downloading kic base image for docker with docker
	I1011 19:01:04.166634    5476 out.go:177] * Pulling base image ...
	I1011 19:01:04.173651    5476 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 19:01:04.173651    5476 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1011 19:01:04.173651    5476 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1011 19:01:04.173651    5476 cache.go:57] Caching tarball of preloaded images
	I1011 19:01:04.173651    5476 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1011 19:01:04.174493    5476 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1011 19:01:04.174493    5476 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900\config.json ...
	I1011 19:01:04.361123    5476 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1011 19:01:04.361308    5476 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1011 19:01:04.361308    5476 cache.go:195] Successfully downloaded all kic artifacts
	I1011 19:01:04.361476    5476 start.go:365] acquiring machines lock for pause-375900: {Name:mk50584c1fc2e419d1876b13b5856da06ffa62c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:04.361641    5476 start.go:369] acquired machines lock for "pause-375900" in 165.6µs
	I1011 19:01:04.361820    5476 start.go:96] Skipping create...Using existing machine configuration
	I1011 19:01:04.361820    5476 fix.go:54] fixHost starting: 
	I1011 19:01:04.375885    5476 cli_runner.go:164] Run: docker container inspect pause-375900 --format={{.State.Status}}
	I1011 19:01:04.547185    5476 fix.go:102] recreateIfNeeded on pause-375900: state=Running err=<nil>
	W1011 19:01:04.547185    5476 fix.go:128] unexpected machine state, will restart: <nil>
	I1011 19:01:04.551981    5476 out.go:177] * Updating the running docker "pause-375900" container ...
	I1011 19:01:04.559100    5476 machine.go:88] provisioning docker machine ...
	I1011 19:01:04.559100    5476 ubuntu.go:169] provisioning hostname "pause-375900"
	I1011 19:01:04.565481    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:04.736463    5476 main.go:141] libmachine: Using SSH client type: native
	I1011 19:01:04.736463    5476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52535 <nil> <nil>}
	I1011 19:01:04.736463    5476 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-375900 && echo "pause-375900" | sudo tee /etc/hostname
	I1011 19:01:04.965597    5476 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-375900
	
	I1011 19:01:04.972634    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:05.157228    5476 main.go:141] libmachine: Using SSH client type: native
	I1011 19:01:05.158226    5476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52535 <nil> <nil>}
	I1011 19:01:05.158226    5476 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-375900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-375900/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-375900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 19:01:05.363537    5476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 19:01:05.363537    5476 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I1011 19:01:05.363537    5476 ubuntu.go:177] setting up certificates
	I1011 19:01:05.363537    5476 provision.go:83] configureAuth start
	I1011 19:01:05.369526    5476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-375900
	I1011 19:01:05.562149    5476 provision.go:138] copyHostCerts
	I1011 19:01:05.562149    5476 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I1011 19:01:05.562149    5476 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I1011 19:01:05.563034    5476 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1011 19:01:05.563922    5476 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I1011 19:01:05.563922    5476 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I1011 19:01:05.564608    5476 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I1011 19:01:05.565322    5476 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I1011 19:01:05.565322    5476 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I1011 19:01:05.565977    5476 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1011 19:01:05.566900    5476 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-375900 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube pause-375900]
	I1011 19:01:05.671835    5476 provision.go:172] copyRemoteCerts
	I1011 19:01:05.680827    5476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 19:01:05.686835    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:05.860358    5476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52535 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\pause-375900\id_rsa Username:docker}
	I1011 19:01:06.011289    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 19:01:06.084373    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1011 19:01:06.139448    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 19:01:06.194795    5476 provision.go:86] duration metric: configureAuth took 831.2543ms
	I1011 19:01:06.194795    5476 ubuntu.go:193] setting minikube options for container-runtime
	I1011 19:01:06.195953    5476 config.go:182] Loaded profile config "pause-375900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:01:06.206306    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:06.400043    5476 main.go:141] libmachine: Using SSH client type: native
	I1011 19:01:06.400828    5476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52535 <nil> <nil>}
	I1011 19:01:06.400828    5476 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1011 19:01:06.604378    5476 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1011 19:01:06.604378    5476 ubuntu.go:71] root file system type: overlay
	I1011 19:01:06.604378    5476 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1011 19:01:06.612361    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:06.791121    5476 main.go:141] libmachine: Using SSH client type: native
	I1011 19:01:06.792118    5476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52535 <nil> <nil>}
	I1011 19:01:06.792118    5476 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1011 19:01:07.016264    5476 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1011 19:01:07.022265    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:07.231986    5476 main.go:141] libmachine: Using SSH client type: native
	I1011 19:01:07.232140    5476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52535 <nil> <nil>}
	I1011 19:01:07.233074    5476 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1011 19:01:07.450984    5476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 19:01:07.450984    5476 machine.go:91] provisioned docker machine in 2.8918708s
	I1011 19:01:07.450984    5476 start.go:300] post-start starting for "pause-375900" (driver="docker")
	I1011 19:01:07.450984    5476 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 19:01:07.467967    5476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 19:01:07.474975    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:07.662977    5476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52535 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\pause-375900\id_rsa Username:docker}
	I1011 19:01:07.819961    5476 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 19:01:07.828965    5476 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 19:01:07.828965    5476 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 19:01:07.828965    5476 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 19:01:07.828965    5476 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1011 19:01:07.828965    5476 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I1011 19:01:07.828965    5476 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I1011 19:01:07.829967    5476 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem -> 15562.pem in /etc/ssl/certs
	I1011 19:01:07.842998    5476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 19:01:07.868988    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem --> /etc/ssl/certs/15562.pem (1708 bytes)
	I1011 19:01:07.926018    5476 start.go:303] post-start completed in 475.0317ms
	I1011 19:01:07.937002    5476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 19:01:07.942999    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:08.168530    5476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52535 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\pause-375900\id_rsa Username:docker}
	I1011 19:01:08.304503    5476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 19:01:08.316504    5476 fix.go:56] fixHost completed within 3.9546654s
	I1011 19:01:08.316504    5476 start.go:83] releasing machines lock for "pause-375900", held for 3.9548445s
	I1011 19:01:08.325507    5476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-375900
	I1011 19:01:08.533119    5476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 19:01:08.542111    5476 ssh_runner.go:195] Run: cat /version.json
	I1011 19:01:08.543119    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:08.551127    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:08.756132    5476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52535 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\pause-375900\id_rsa Username:docker}
	I1011 19:01:08.785560    5476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52535 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\pause-375900\id_rsa Username:docker}
	I1011 19:01:09.122575    5476 ssh_runner.go:195] Run: systemctl --version
	I1011 19:01:09.142575    5476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 19:01:09.178251    5476 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W1011 19:01:09.202228    5476 start.go:416] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I1011 19:01:09.214236    5476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 19:01:09.234250    5476 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1011 19:01:09.234250    5476 start.go:472] detecting cgroup driver to use...
	I1011 19:01:09.234250    5476 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1011 19:01:09.235229    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:01:09.299239    5476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1011 19:01:09.337237    5476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 19:01:09.369226    5476 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1011 19:01:09.384230    5476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1011 19:01:09.428235    5476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:01:09.485228    5476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 19:01:09.519220    5476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:01:09.561263    5476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 19:01:09.609218    5476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 19:01:09.650304    5476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 19:01:09.694311    5476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 19:01:09.729302    5476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:01:09.961874    5476 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1011 19:01:20.414137    5476 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.4522146s)
	I1011 19:01:20.415143    5476 start.go:472] detecting cgroup driver to use...
	I1011 19:01:20.415143    5476 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1011 19:01:20.429141    5476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1011 19:01:20.470363    5476 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1011 19:01:20.488357    5476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 19:01:20.563385    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:01:20.630764    5476 ssh_runner.go:195] Run: which cri-dockerd
	I1011 19:01:20.653071    5476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1011 19:01:20.682064    5476 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1011 19:01:20.733061    5476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1011 19:01:20.995008    5476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1011 19:01:21.282054    5476 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1011 19:01:21.282054    5476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1011 19:01:21.340007    5476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:01:21.703243    5476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 19:01:22.739037    5476 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0357894s)
	I1011 19:01:22.756830    5476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 19:01:23.013517    5476 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1011 19:01:23.221767    5476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 19:01:23.416161    5476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:01:23.604694    5476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1011 19:01:23.675694    5476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:01:24.000376    5476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1011 19:01:24.313920    5476 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1011 19:01:24.327906    5476 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1011 19:01:24.359446    5476 start.go:540] Will wait 60s for crictl version
	I1011 19:01:24.376922    5476 ssh_runner.go:195] Run: which crictl
	I1011 19:01:24.409917    5476 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 19:01:24.656561    5476 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1011 19:01:24.668559    5476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:01:24.731551    5476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:01:24.797556    5476 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1011 19:01:24.803569    5476 cli_runner.go:164] Run: docker exec -t pause-375900 dig +short host.docker.internal
	I1011 19:01:25.247399    5476 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1011 19:01:25.268663    5476 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1011 19:01:25.293633    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:25.500993    5476 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 19:01:25.512009    5476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 19:01:25.558236    5476 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 19:01:25.558335    5476 docker.go:619] Images already preloaded, skipping extraction
	I1011 19:01:25.571267    5476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 19:01:25.619290    5476 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 19:01:25.619290    5476 cache_images.go:84] Images are preloaded, skipping loading
	I1011 19:01:25.625270    5476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1011 19:01:25.767279    5476 cni.go:84] Creating CNI manager for ""
	I1011 19:01:25.768279    5476 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 19:01:25.768279    5476 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1011 19:01:25.768279    5476 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-375900 NodeName:pause-375900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 19:01:25.768279    5476 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-375900"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 19:01:25.768279    5476 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-375900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-375900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1011 19:01:25.782262    5476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1011 19:01:25.804267    5476 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 19:01:25.813262    5476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 19:01:25.836276    5476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1011 19:01:25.888275    5476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 19:01:25.929260    5476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
	I1011 19:01:25.994718    5476 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1011 19:01:26.006711    5476 certs.go:56] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900 for IP: 192.168.85.2
	I1011 19:01:26.006711    5476 certs.go:190] acquiring lock for shared ca certs: {Name:mka39b35711ce17aa627001b408a7adb2f266bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:01:26.007725    5476 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I1011 19:01:26.007725    5476 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I1011 19:01:26.008722    5476 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900\client.key
	I1011 19:01:26.008722    5476 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900\apiserver.key.43b9df8c
	I1011 19:01:26.009714    5476 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900\proxy-client.key
	I1011 19:01:26.010711    5476 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556.pem (1338 bytes)
	W1011 19:01:26.010711    5476 certs.go:433] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556_empty.pem, impossibly tiny 0 bytes
	I1011 19:01:26.010711    5476 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1011 19:01:26.011714    5476 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1011 19:01:26.011714    5476 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1011 19:01:26.011714    5476 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1011 19:01:26.011714    5476 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem (1708 bytes)
	I1011 19:01:26.013723    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1011 19:01:26.103107    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 19:01:26.225362    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 19:01:26.318373    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 19:01:26.391409    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 19:01:26.462426    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 19:01:26.574416    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 19:01:26.663419    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 19:01:26.882750    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem --> /usr/share/ca-certificates/15562.pem (1708 bytes)
	I1011 19:01:27.171435    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 19:01:27.459290    5476 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556.pem --> /usr/share/ca-certificates/1556.pem (1338 bytes)
	I1011 19:01:27.768843    5476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 19:01:28.087834    5476 ssh_runner.go:195] Run: openssl version
	I1011 19:01:28.179815    5476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15562.pem && ln -fs /usr/share/ca-certificates/15562.pem /etc/ssl/certs/15562.pem"
	I1011 19:01:28.380738    5476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15562.pem
	I1011 19:01:28.467991    5476 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 11 18:04 /usr/share/ca-certificates/15562.pem
	I1011 19:01:28.489982    5476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15562.pem
	I1011 19:01:28.679665    5476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15562.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 19:01:28.882652    5476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 19:01:29.075256    5476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:01:29.257498    5476 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 11 17:53 /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:01:29.273495    5476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:01:29.390503    5476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 19:01:29.582765    5476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1556.pem && ln -fs /usr/share/ca-certificates/1556.pem /etc/ssl/certs/1556.pem"
	I1011 19:01:29.780811    5476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1556.pem
	I1011 19:01:29.895799    5476 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 11 18:04 /usr/share/ca-certificates/1556.pem
	I1011 19:01:29.907798    5476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1556.pem
	I1011 19:01:29.941808    5476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1556.pem /etc/ssl/certs/51391683.0"
	I1011 19:01:29.999732    5476 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1011 19:01:30.096110    5476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 19:01:30.189988    5476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 19:01:30.280167    5476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 19:01:30.390880    5476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 19:01:30.493914    5476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 19:01:30.585855    5476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
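Each of the six `openssl x509 -checkend 86400` runs above asks whether a control-plane certificate will still be valid 24 hours from now (exit status 0 if yes, 1 if it expires within the window); minikube uses this to decide whether certs need regeneration before a restart. The date arithmetic behind `-checkend`, sketched in Python against a known NotAfter timestamp (no certificate parsing here, just the check itself):

```python
from datetime import datetime, timedelta, timezone

def cert_still_valid(not_after: datetime, window_seconds: int = 86400) -> bool:
    """Mirror `openssl x509 -checkend <seconds>`: True if the certificate
    will NOT yet have expired once window_seconds have elapsed from now."""
    return datetime.now(timezone.utc) + timedelta(seconds=window_seconds) < not_after
```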
	I1011 19:01:30.674607    5476 kubeadm.go:404] StartCluster: {Name:pause-375900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-375900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 19:01:30.687629    5476 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 19:01:31.085365    5476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 19:01:31.275844    5476 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1011 19:01:31.275844    5476 kubeadm.go:636] restartCluster start
	I1011 19:01:31.292849    5476 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 19:01:31.460782    5476 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 19:01:31.473032    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-375900
	I1011 19:01:31.658610    5476 kubeconfig.go:92] found "pause-375900" server: "https://127.0.0.1:52534"
	I1011 19:01:31.662565    5476 kapi.go:59] client config for pause-375900: &rest.Config{Host:"https://127.0.0.1:52534", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\pause-375900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\pause-375900\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e44dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 19:01:31.679563    5476 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 19:01:31.861766    5476 api_server.go:166] Checking apiserver status ...
	I1011 19:01:31.881642    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:01:32.087203    5476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5337/cgroup
	I1011 19:01:32.255490    5476 api_server.go:182] apiserver freezer: "7:freezer:/docker/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00/kubepods/burstable/pod4136322af9be4223b3ac03c6ba991c3d/8dd1f463809addd6a8a63a2a72be57a1fdca6b45a38620652840ce6ef61759dc"
	I1011 19:01:32.270488    5476 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00/kubepods/burstable/pod4136322af9be4223b3ac03c6ba991c3d/8dd1f463809addd6a8a63a2a72be57a1fdca6b45a38620652840ce6ef61759dc/freezer.state
	I1011 19:01:32.362530    5476 api_server.go:204] freezer state: "THAWED"
	I1011 19:01:32.362530    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:35.162091    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 19:01:35.162091    5476 retry.go:31] will retry after 190.875443ms: https://127.0.0.1:52534/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 19:01:35.364917    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:35.470534    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:35.470652    5476 retry.go:31] will retry after 371.635072ms: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:35.853539    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:35.869013    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:35.869013    5476 retry.go:31] will retry after 447.498125ms: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:36.328451    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:36.348859    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:36.348859    5476 retry.go:31] will retry after 391.195928ms: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:36.742156    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:36.756234    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:36.757203    5476 retry.go:31] will retry after 650.473562ms: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:37.409136    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:39.125342    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:39.125342    5476 retry.go:31] will retry after 851.309614ms: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:39.987825    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:40.005219    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:40.005539    5476 retry.go:31] will retry after 1.036481518s: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:41.061763    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:41.083786    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:41.083786    5476 retry.go:31] will retry after 1.251967696s: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:42.343088    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:44.361905    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:44.361990    5476 retry.go:31] will retry after 1.517209465s: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:45.884001    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:45.918915    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:45.918915    5476 kubeadm.go:611] needs reconfigure: apiserver error: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:45.918915    5476 kubeadm.go:1128] stopping kube-system containers ...
	I1011 19:01:45.926573    5476 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 19:01:45.985555    5476 docker.go:464] Stopping containers: [3a68e3e25c04 46a29adb775e 8cbb52bc4624 6ae2fd93692b e3fc1b46e1fe 8dd1f463809a ffd4e4805972 0f568baa0e86 7916b8c1fe0a b6ad1cd53788 2940af478bf4 19c537ee4c38 2723c4f657e7 5e12dae82588 c204959f5acc df774251868c 6945e4707c4d 039067db1d5f f6959012a17f 7da3ce2f52d8 ded1b9f0e8c8 92e3eeffa421]
	I1011 19:01:45.991576    5476 ssh_runner.go:195] Run: docker stop 3a68e3e25c04 46a29adb775e 8cbb52bc4624 6ae2fd93692b e3fc1b46e1fe 8dd1f463809a ffd4e4805972 0f568baa0e86 7916b8c1fe0a b6ad1cd53788 2940af478bf4 19c537ee4c38 2723c4f657e7 5e12dae82588 c204959f5acc df774251868c 6945e4707c4d 039067db1d5f f6959012a17f 7da3ce2f52d8 ded1b9f0e8c8 92e3eeffa421
	I1011 19:01:57.663053    5476 ssh_runner.go:235] Completed: docker stop 3a68e3e25c04 46a29adb775e 8cbb52bc4624 6ae2fd93692b e3fc1b46e1fe 8dd1f463809a ffd4e4805972 0f568baa0e86 7916b8c1fe0a b6ad1cd53788 2940af478bf4 19c537ee4c38 2723c4f657e7 5e12dae82588 c204959f5acc df774251868c 6945e4707c4d 039067db1d5f f6959012a17f 7da3ce2f52d8 ded1b9f0e8c8 92e3eeffa421: (11.6714234s)
	I1011 19:01:57.678496    5476 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 19:01:58.092441    5476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 19:01:58.176305    5476 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Oct 11 19:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 11 19:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Oct 11 19:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 11 19:00 /etc/kubernetes/scheduler.conf
	
	I1011 19:01:58.191317    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 19:01:58.275299    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 19:01:58.425341    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 19:01:58.457331    5476 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1011 19:01:58.473314    5476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 19:01:58.512305    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 19:01:58.538307    5476 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1011 19:01:58.550306    5476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 19:01:58.593354    5476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 19:01:58.619362    5476 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1011 19:01:58.619362    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:01:58.844867    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.133319    5476 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2884459s)
	I1011 19:02:00.133319    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.570573    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.768689    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.993166    5476 api_server.go:52] waiting for apiserver process to appear ...
	I1011 19:02:01.010109    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:01.162421    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:01.792140    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:02.294577    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:02.805479    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:03.297188    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:03.798871    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:04.290450    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:04.565797    5476 api_server.go:72] duration metric: took 3.5726139s to wait for apiserver process to appear ...
	I1011 19:02:04.565797    5476 api_server.go:88] waiting for apiserver healthz status ...
	I1011 19:02:04.565797    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:04.570848    5476 api_server.go:269] stopped: https://127.0.0.1:52534/healthz: Get "https://127.0.0.1:52534/healthz": EOF
	I1011 19:02:04.570848    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:04.574810    5476 api_server.go:269] stopped: https://127.0.0.1:52534/healthz: Get "https://127.0.0.1:52534/healthz": EOF
	I1011 19:02:05.090000    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:09.858527    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 19:02:09.859067    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 19:02:09.859067    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:10.154272    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:10.154272    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:10.154272    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:10.169841    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:10.169841    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:10.585851    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:10.598549    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:10.598549    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:11.075408    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:11.446265    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:11.446265    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:11.580523    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:11.595954    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:11.595954    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:12.086403    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:12.119892    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:12.119892    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:12.588448    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:12.664492    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:12.664492    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:13.077337    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:13.164113    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:13.164113    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:13.583348    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:13.664216    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:13.664216    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:14.088164    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:14.153505    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:14.153505    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:14.589228    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:14.601220    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 200:
	ok
	I1011 19:02:14.619236    5476 api_server.go:141] control plane version: v1.28.2
	I1011 19:02:14.619236    5476 api_server.go:131] duration metric: took 10.0533926s to wait for apiserver health ...
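The long run of 500s above followed by a single 200 is a simple poll-until-healthy loop: minikube keeps issuing GETs against `/healthz` until the `[-]` poststarthooks (`rbac/bootstrap-roles`, `scheduling/bootstrap-system-priority-classes`) clear. A minimal sketch of that pattern (`wait_for_healthz`, `check`, and the timings are illustrative, not minikube's actual code):

```python
import time

def wait_for_healthz(check, timeout=30.0, interval=0.5):
    """Poll a healthz-style endpoint until it reports healthy or time out.

    `check` is any callable returning an HTTP status code -- in the log above
    it would wrap a GET on https://127.0.0.1:52534/healthz. Returns elapsed
    seconds on success; raises TimeoutError if the deadline passes.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if check() == 200:        # all poststarthooks report ok
            return time.monotonic() - start
        time.sleep(interval)      # 500 while bootstrap hooks still running
    raise TimeoutError("apiserver /healthz did not become ready")

# Simulated apiserver: two 500s while bootstrap hooks finish, then 200,
# mirroring the sequence of responses logged above.
responses = iter([500, 500, 200])
elapsed = wait_for_healthz(lambda: next(responses), timeout=5.0, interval=0.01)
```

The per-request log pairs (an `I` line with the body, then a `W` line repeating it) correspond to one iteration of such a loop.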
	I1011 19:02:14.619236    5476 cni.go:84] Creating CNI manager for ""
	I1011 19:02:14.619236    5476 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 19:02:14.622242    5476 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 19:02:14.633210    5476 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 19:02:14.655235    5476 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
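The 457-byte `/etc/cni/net.d/1-k8s.conflist` written above is a CNI plugin-chain config for the bridge CNI that the preceding lines select. A generic bridge conflist has roughly this shape (field values here are illustrative of the format, not minikube's exact file):

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```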
	I1011 19:02:14.699224    5476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 19:02:14.714223    5476 system_pods.go:59] 6 kube-system pods found
	I1011 19:02:14.714223    5476 system_pods.go:61] "coredns-5dd5756b68-g2h9s" [6626c9fe-763e-46b0-a66a-5bd39e157d8d] Running
	I1011 19:02:14.714223    5476 system_pods.go:61] "etcd-pause-375900" [4bf74444-af82-451c-b1d3-36e322aebe0b] Running
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-apiserver-pause-375900" [4b6ca595-2579-4609-972b-3d352dbc9971] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-controller-manager-pause-375900" [b6f72b4d-30c2-4679-9634-612b1e81dc5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-proxy-6wv6x" [86829575-b97b-4960-a459-934aecb00dd5] Running
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-scheduler-pause-375900" [e9b9a50f-2ab2-414b-a8a4-51708cdfb4d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 19:02:14.714223    5476 system_pods.go:74] duration metric: took 14.9987ms to wait for pod list to return data ...
	I1011 19:02:14.714223    5476 node_conditions.go:102] verifying NodePressure condition ...
	I1011 19:02:14.748226    5476 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1011 19:02:14.748226    5476 node_conditions.go:123] node cpu capacity is 16
	I1011 19:02:14.748226    5476 node_conditions.go:105] duration metric: took 34.0027ms to run NodePressure ...
	I1011 19:02:14.748226    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:15.356351    5476 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1011 19:02:15.367367    5476 kubeadm.go:787] kubelet initialised
	I1011 19:02:15.367367    5476 kubeadm.go:788] duration metric: took 11.0154ms waiting for restarted kubelet to initialise ...
	I1011 19:02:15.367367    5476 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:15.379351    5476 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:15.395368    5476 pod_ready.go:92] pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:15.395368    5476 pod_ready.go:81] duration metric: took 16.0168ms waiting for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:15.395368    5476 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.763545    5476 pod_ready.go:92] pod "etcd-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:16.763615    5476 pod_ready.go:81] duration metric: took 1.368241s waiting for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.763615    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.778930    5476 pod_ready.go:92] pod "kube-apiserver-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:16.778930    5476 pod_ready.go:81] duration metric: took 15.2525ms waiting for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.778930    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:18.873656    5476 pod_ready.go:102] pod "kube-controller-manager-pause-375900" in "kube-system" namespace has status "Ready":"False"
	I1011 19:02:21.371026    5476 pod_ready.go:92] pod "kube-controller-manager-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.371026    5476 pod_ready.go:81] duration metric: took 4.5920747s waiting for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.371026    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.391043    5476 pod_ready.go:92] pod "kube-proxy-6wv6x" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.391043    5476 pod_ready.go:81] duration metric: took 20.0173ms waiting for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.391043    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.412018    5476 pod_ready.go:92] pod "kube-scheduler-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.412018    5476 pod_ready.go:81] duration metric: took 20.9746ms waiting for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.412018    5476 pod_ready.go:38] duration metric: took 6.0446238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
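The `pod_ready.go` lines above wait for each system-critical pod in turn, giving each its own 4m budget; most pods are Ready immediately, while `kube-controller-manager` takes several polls (the `"Ready":"False"` line) before flipping to `"True"`. A sketch of that sequential per-pod wait (`wait_pods_ready` and `get_status` are illustrative names, not minikube's API):

```python
import time

def wait_pods_ready(get_status, pods, per_pod_timeout=240.0, interval=0.01):
    """Wait for each pod in `pods` to report Ready, one at a time.

    `get_status` maps a pod name to True (Ready) / False -- in minikube this
    is a check of the pod's status conditions. Each pod gets its own budget
    (4m0s in the log). Returns per-pod wait durations in seconds.
    """
    durations = {}
    for pod in pods:
        start = time.monotonic()
        while not get_status(pod):
            if time.monotonic() - start > per_pod_timeout:
                raise TimeoutError(f"pod {pod} never became Ready")
            time.sleep(interval)
        durations[pod] = time.monotonic() - start
    return durations

# Simulated statuses: controller-manager needs a few polls before Ready,
# like the ~4.6s wait logged above; everything else is Ready on first check.
calls = {"kube-controller-manager": 0}
def status(pod):
    if pod == "kube-controller-manager":
        calls[pod] += 1
        return calls[pod] >= 3
    return True

d = wait_pods_ready(status, ["coredns", "etcd", "kube-apiserver",
                             "kube-controller-manager", "kube-proxy",
                             "kube-scheduler"])
```

The total of the per-pod durations is what the final `duration metric` line reports for the whole readiness phase.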
	I1011 19:02:21.412018    5476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 19:02:21.432014    5476 ops.go:34] apiserver oom_adj: -16
	I1011 19:02:21.432014    5476 kubeadm.go:640] restartCluster took 50.1559392s
	I1011 19:02:21.432014    5476 kubeadm.go:406] StartCluster complete in 50.7571735s
	I1011 19:02:21.432014    5476 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:21.432014    5476 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1011 19:02:21.433012    5476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:21.434012    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 19:02:21.434012    5476 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1011 19:02:21.440018    5476 out.go:177] * Enabled addons: 
	I1011 19:02:21.435016    5476 config.go:182] Loaded profile config "pause-375900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:02:21.444015    5476 addons.go:502] enable addons completed in 10.0033ms: enabled=[]
	I1011 19:02:21.448019    5476 kapi.go:59] client config for pause-375900: &rest.Config{Host:"https://127.0.0.1:52534", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\pause-375900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\pause-375900\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e44dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 19:02:21.457026    5476 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-375900" context rescaled to 1 replicas
	I1011 19:02:21.457026    5476 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 19:02:21.462035    5476 out.go:177] * Verifying Kubernetes components...
	I1011 19:02:21.479022    5476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 19:02:21.616332    5476 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1011 19:02:21.627333    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-375900
	I1011 19:02:21.839597    5476 node_ready.go:35] waiting up to 6m0s for node "pause-375900" to be "Ready" ...
	I1011 19:02:21.850416    5476 node_ready.go:49] node "pause-375900" has status "Ready":"True"
	I1011 19:02:21.850509    5476 node_ready.go:38] duration metric: took 10.7886ms waiting for node "pause-375900" to be "Ready" ...
	I1011 19:02:21.850509    5476 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:21.864611    5476 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.885624    5476 pod_ready.go:92] pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.885624    5476 pod_ready.go:81] duration metric: took 21.0126ms waiting for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.885624    5476 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.898610    5476 pod_ready.go:92] pod "etcd-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.898610    5476 pod_ready.go:81] duration metric: took 12.986ms waiting for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.898610    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.179285    5476 pod_ready.go:92] pod "kube-apiserver-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:22.179285    5476 pod_ready.go:81] duration metric: took 280.6738ms waiting for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.179285    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.565413    5476 pod_ready.go:92] pod "kube-controller-manager-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:22.566410    5476 pod_ready.go:81] duration metric: took 387.1239ms waiting for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.566410    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.977552    5476 pod_ready.go:92] pod "kube-proxy-6wv6x" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:22.977552    5476 pod_ready.go:81] duration metric: took 411.1397ms waiting for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.977552    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:23.368906    5476 pod_ready.go:92] pod "kube-scheduler-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:23.369009    5476 pod_ready.go:81] duration metric: took 391.4553ms waiting for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:23.369009    5476 pod_ready.go:38] duration metric: took 1.5184932s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:23.369072    5476 api_server.go:52] waiting for apiserver process to appear ...
	I1011 19:02:23.381249    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:23.423242    5476 api_server.go:72] duration metric: took 1.9662072s to wait for apiserver process to appear ...
	I1011 19:02:23.423242    5476 api_server.go:88] waiting for apiserver healthz status ...
	I1011 19:02:23.423242    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:23.438236    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 200:
	ok
	I1011 19:02:23.444880    5476 api_server.go:141] control plane version: v1.28.2
	I1011 19:02:23.445145    5476 api_server.go:131] duration metric: took 21.9033ms to wait for apiserver health ...
	I1011 19:02:23.445251    5476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 19:02:23.580252    5476 system_pods.go:59] 6 kube-system pods found
	I1011 19:02:23.580252    5476 system_pods.go:61] "coredns-5dd5756b68-g2h9s" [6626c9fe-763e-46b0-a66a-5bd39e157d8d] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "etcd-pause-375900" [4bf74444-af82-451c-b1d3-36e322aebe0b] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-apiserver-pause-375900" [4b6ca595-2579-4609-972b-3d352dbc9971] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-controller-manager-pause-375900" [b6f72b4d-30c2-4679-9634-612b1e81dc5d] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-proxy-6wv6x" [86829575-b97b-4960-a459-934aecb00dd5] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-scheduler-pause-375900" [e9b9a50f-2ab2-414b-a8a4-51708cdfb4d4] Running
	I1011 19:02:23.580252    5476 system_pods.go:74] duration metric: took 134.9669ms to wait for pod list to return data ...
	I1011 19:02:23.580252    5476 default_sa.go:34] waiting for default service account to be created ...
	I1011 19:02:23.767255    5476 default_sa.go:45] found service account: "default"
	I1011 19:02:23.768271    5476 default_sa.go:55] duration metric: took 186.9941ms for default service account to be created ...
	I1011 19:02:23.768271    5476 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 19:02:23.974250    5476 system_pods.go:86] 6 kube-system pods found
	I1011 19:02:23.974250    5476 system_pods.go:89] "coredns-5dd5756b68-g2h9s" [6626c9fe-763e-46b0-a66a-5bd39e157d8d] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "etcd-pause-375900" [4bf74444-af82-451c-b1d3-36e322aebe0b] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-apiserver-pause-375900" [4b6ca595-2579-4609-972b-3d352dbc9971] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-controller-manager-pause-375900" [b6f72b4d-30c2-4679-9634-612b1e81dc5d] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-proxy-6wv6x" [86829575-b97b-4960-a459-934aecb00dd5] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-scheduler-pause-375900" [e9b9a50f-2ab2-414b-a8a4-51708cdfb4d4] Running
	I1011 19:02:23.974250    5476 system_pods.go:126] duration metric: took 205.9779ms to wait for k8s-apps to be running ...
	I1011 19:02:23.974250    5476 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 19:02:23.987285    5476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 19:02:24.018267    5476 system_svc.go:56] duration metric: took 44.017ms WaitForService to wait for kubelet.
	I1011 19:02:24.018267    5476 kubeadm.go:581] duration metric: took 2.5612295s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1011 19:02:24.018267    5476 node_conditions.go:102] verifying NodePressure condition ...
	I1011 19:02:24.176265    5476 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1011 19:02:24.177257    5476 node_conditions.go:123] node cpu capacity is 16
	I1011 19:02:24.177257    5476 node_conditions.go:105] duration metric: took 158.9893ms to run NodePressure ...
	I1011 19:02:24.177257    5476 start.go:228] waiting for startup goroutines ...
	I1011 19:02:24.177257    5476 start.go:233] waiting for cluster config update ...
	I1011 19:02:24.177257    5476 start.go:242] writing updated cluster config ...
	I1011 19:02:24.194287    5476 ssh_runner.go:195] Run: rm -f paused
	I1011 19:02:24.357690    5476 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1011 19:02:24.361706    5476 out.go:177] * Done! kubectl is now configured to use "pause-375900" cluster and "default" namespace by default
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-375900
helpers_test.go:235: (dbg) docker inspect pause-375900:
-- stdout --
	[
	    {
	        "Id": "2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00",
	        "Created": "2023-10-11T18:59:23.0108246Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 223487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-11T18:59:23.7085463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00/hosts",
	        "LogPath": "/var/lib/docker/containers/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00-json.log",
	        "Name": "/pause-375900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-375900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-375900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6ad71f9dafc17ce1267f3b3e3222686fe47340dfa256b304581f78eaef6347c2-init/diff:/var/lib/docker/overlay2/6a818081599e04504e41e5c7d63b7e52f1ec769a66e42764d0a42ce267813803/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ad71f9dafc17ce1267f3b3e3222686fe47340dfa256b304581f78eaef6347c2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ad71f9dafc17ce1267f3b3e3222686fe47340dfa256b304581f78eaef6347c2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ad71f9dafc17ce1267f3b3e3222686fe47340dfa256b304581f78eaef6347c2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-375900",
	                "Source": "/var/lib/docker/volumes/pause-375900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-375900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-375900",
	                "name.minikube.sigs.k8s.io": "pause-375900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "259bda10ce091715fd41c6598f23ad9dcdc94890aebdf17fc6ee20a352f88bb4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52535"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52536"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52537"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52533"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52534"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/259bda10ce09",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-375900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2f79efe94b89",
	                        "pause-375900"
	                    ],
	                    "NetworkID": "73597507752d28f00176e111983f096c5dbcf3c5c87d646a205ffcace72b7fe9",
	                    "EndpointID": "c536ec0913e823bbcfa8a7c3fd544d890760234ad9b473c74526f2236a867914",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-375900 -n pause-375900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-375900 -n pause-375900: (1.5422186s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-375900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-375900 logs -n 25: (5.2939804s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl cat docker                                 |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo cat                            | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /etc/docker/daemon.json                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo docker                         | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | system info                                          |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl status cri-docker                          |                          |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo cat                            | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo cat                            | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | cri-dockerd --version                                |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl status containerd                          |                          |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl cat containerd                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo cat                            | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo cat                            | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | containerd config dump                               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl status crio --all                          |                          |                   |         |                     |                     |
	|         | --full --no-pager                                    |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo find                           | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo crio                           | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | config                                               |                          |                   |         |                     |                     |
	| delete  | -p cilium-035800                                     | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC | 11 Oct 23 19:01 UTC |
	| start   | -p force-systemd-env-769500                          | force-systemd-env-769500 | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | --memory=2048                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	| ssh     | docker-flags-068100 ssh                              | docker-flags-068100      | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC | 11 Oct 23 19:01 UTC |
	|         | sudo systemctl show docker                           |                          |                   |         |                     |                     |
	|         | --property=Environment                               |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | docker-flags-068100 ssh                              | docker-flags-068100      | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC | 11 Oct 23 19:01 UTC |
	|         | sudo systemctl show docker                           |                          |                   |         |                     |                     |
	|         | --property=ExecStart                                 |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| delete  | -p docker-flags-068100                               | docker-flags-068100      | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC | 11 Oct 23 19:01 UTC |
	| delete  | -p running-upgrade-051900                            | running-upgrade-051900   | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC | 11 Oct 23 19:01 UTC |
	| start   | -p old-k8s-version-796400                            | old-k8s-version-796400   | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | --memory=2200                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                          |                   |         |                     |                     |
	|         | --kvm-network=default                                |                          |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                          |                   |         |                     |                     |
	|         | --disable-driver-mounts                              |                          |                   |         |                     |                     |
	|         | --keep-context=false                                 |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                          |                   |         |                     |                     |
	| start   | -p no-preload-517500                                 | no-preload-517500        | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | --memory=2200                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr                                    |                          |                   |         |                     |                     |
	|         | --wait=true --preload=false                          |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                         |                          |                   |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/11 19:01:41
	Running on machine: minikube2
	Binary: Built with gc go1.21.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 19:01:41.300843    9448 out.go:296] Setting OutFile to fd 1860 ...
	I1011 19:01:41.301832    9448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 19:01:41.301832    9448 out.go:309] Setting ErrFile to fd 1492...
	I1011 19:01:41.301832    9448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 19:01:41.318835    9448 out.go:303] Setting JSON to false
	I1011 19:01:41.322837    9448 start.go:128] hostinfo: {"hostname":"minikube2","uptime":5012,"bootTime":1697045888,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3570 Build 19045.3570","kernelVersion":"10.0.19045.3570 Build 19045.3570","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1011 19:01:41.322837    9448 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1011 19:01:41.333883    9448 out.go:177] * [no-preload-517500] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	I1011 19:01:41.340836    9448 notify.go:220] Checking for updates...
	I1011 19:01:41.344839    9448 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1011 19:01:41.351841    9448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 19:01:41.358850    9448 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1011 19:01:41.364887    9448 out.go:177]   - MINIKUBE_LOCATION=17402
	I1011 19:01:41.371849    9448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 19:01:40.968814    1140 config.go:182] Loaded profile config "force-systemd-env-769500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:01:40.969777    1140 config.go:182] Loaded profile config "pause-375900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:01:40.969777    1140 config.go:182] Loaded profile config "running-upgrade-051900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1011 19:01:40.969777    1140 driver.go:378] Setting default libvirt URI to qemu:///system
	I1011 19:01:41.263869    1140 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.1 (123237)
	I1011 19:01:41.269857    1140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:41.640851    1140 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:82 SystemTime:2023-10-11 19:01:41.5891175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:41.647846    1140 out.go:177] * Using the docker driver based on user configuration
	I1011 19:01:41.376843    9448 config.go:182] Loaded profile config "force-systemd-env-769500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:01:41.376843    9448 config.go:182] Loaded profile config "pause-375900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:01:41.377834    9448 driver.go:378] Setting default libvirt URI to qemu:///system
	I1011 19:01:41.687830    9448 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.1 (123237)
	I1011 19:01:41.695835    9448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:42.110151    9448 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:82 SystemTime:2023-10-11 19:01:42.0547865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:42.117308    9448 out.go:177] * Using the docker driver based on user configuration
	I1011 19:01:41.659843    1140 start.go:298] selected driver: docker
	I1011 19:01:41.659843    1140 start.go:902] validating driver "docker" against <nil>
	I1011 19:01:41.659843    1140 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 19:01:41.734842    1140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:42.124985    1140 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:82 SystemTime:2023-10-11 19:01:42.0721018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:42.124985    1140 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1011 19:01:42.125998    1140 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 19:01:42.132007    1140 out.go:177] * Using Docker Desktop driver with root privileges
	I1011 19:01:42.135987    1140 cni.go:84] Creating CNI manager for ""
	I1011 19:01:42.135987    1140 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1011 19:01:42.135987    1140 start_flags.go:323] config:
	{Name:old-k8s-version-796400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-796400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 19:01:42.141010    1140 out.go:177] * Starting control plane node old-k8s-version-796400 in cluster old-k8s-version-796400
	I1011 19:01:42.147996    1140 cache.go:122] Beginning downloading kic base image for docker with docker
	I1011 19:01:42.153030    1140 out.go:177] * Pulling base image ...
	I1011 19:01:42.161193    1140 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1011 19:01:42.161193    1140 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1011 19:01:42.161820    1140 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1011 19:01:42.161820    1140 cache.go:57] Caching tarball of preloaded images
	I1011 19:01:42.161820    1140 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1011 19:01:42.162365    1140 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1011 19:01:42.162607    1140 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-796400\config.json ...
	I1011 19:01:42.162712    1140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-796400\config.json: {Name:mk6b60692104ca563416dea5167fd5a5170d1dee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:01:42.374079    1140 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1011 19:01:42.374079    1140 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1011 19:01:42.374079    1140 cache.go:195] Successfully downloaded all kic artifacts
	I1011 19:01:42.374079    1140 start.go:365] acquiring machines lock for old-k8s-version-796400: {Name:mkc4efc9d363568ee54213729b0b3cd095a41f46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.374079    1140 start.go:369] acquired machines lock for "old-k8s-version-796400" in 0s
	I1011 19:01:42.374079    1140 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-796400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-796400 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 19:01:42.374079    1140 start.go:125] createHost starting for "" (driver="docker")
	I1011 19:01:42.122984    9448 start.go:298] selected driver: docker
	I1011 19:01:42.122984    9448 start.go:902] validating driver "docker" against <nil>
	I1011 19:01:42.122984    9448 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 19:01:42.185783    9448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:42.563235    9448 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:82 SystemTime:2023-10-11 19:01:42.5088286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:42.563551    9448 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1011 19:01:42.564961    9448 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 19:01:42.569581    9448 out.go:177] * Using Docker Desktop driver with root privileges
	I1011 19:01:42.573883    9448 cni.go:84] Creating CNI manager for ""
	I1011 19:01:42.573883    9448 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 19:01:42.573883    9448 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 19:01:42.573883    9448 start_flags.go:323] config:
	{Name:no-preload-517500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-517500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 19:01:42.577874    9448 out.go:177] * Starting control plane node no-preload-517500 in cluster no-preload-517500
	I1011 19:01:42.587861    9448 cache.go:122] Beginning downloading kic base image for docker with docker
	I1011 19:01:42.592479    9448 out.go:177] * Pulling base image ...
	I1011 19:01:42.599472    9448 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 19:01:42.599472    9448 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1011 19:01:42.599472    9448 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\config.json ...
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.9 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2
	I1011 19:01:42.600491    9448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\config.json: {Name:mk429a522dde83a84625c193e3366eef8e10aa93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.9-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.10.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2
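The `localpath.go` lines above show minikube rewriting image references into Windows-safe cache paths: the `:` that separates an image name from its tag is illegal in an NTFS file name, so it becomes `_` (e.g. `pause:3.9` → `pause_3.9`). A minimal sketch of that mapping, assuming a simple replacement at the last `:` (the helper name `sanitize_cache_path` is hypothetical, not minikube's actual API):

```python
import ntpath

def sanitize_cache_path(cache_root: str, image_ref: str) -> str:
    """Map an image ref like 'registry.k8s.io/pause:3.9' to a
    Windows-safe cache path, replacing the tag ':' with '_'."""
    # Split the repository path from the tag at the last ':'
    repo, sep, tag = image_ref.rpartition(":")
    safe = f"{repo}_{tag}" if sep else image_ref
    # Each path component of the image ref becomes a directory level
    return ntpath.join(cache_root, *safe.split("/"))

print(sanitize_cache_path(r"C:\cache\images\amd64", "registry.k8s.io/pause:3.9"))
# -> C:\cache\images\amd64\registry.k8s.io\pause_3.9
```

Note this naive split would also fire on a registry port (`host:5000/img`); it is a sketch of the transformation visible in the log, not a complete reference parser.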
	I1011 19:01:42.784983    9448 cache.go:107] acquiring lock: {Name:mke142abb3c6a2c41270574b7fb8a623109e608b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.785977    9448 cache.go:107] acquiring lock: {Name:mk4fb1c40f5f6719a0516143715f5e8d99ab233c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.785977    9448 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:01:42.785977    9448 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:01:42.786985    9448 cache.go:107] acquiring lock: {Name:mk8dec1189f683ead1bd04bb2e1c85005d8ca37f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.786985    9448 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:01:42.788022    9448 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.788022    9448 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1011 19:01:42.788988    9448 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 188.4959ms
	I1011 19:01:42.788988    9448 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1011 19:01:42.791989    9448 cache.go:107] acquiring lock: {Name:mk9cc05e0ee5270b563134ba1bb3828ae0a31931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.792991    9448 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:01:42.796983    9448 cache.go:107] acquiring lock: {Name:mk47b91a03ce6ebe82951e077a88bdcd37a4e865 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.796983    9448 cache.go:107] acquiring lock: {Name:mk7898ef7d3c0e6a2ac170399020a6163f90b713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.796983    9448 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1011 19:01:42.796983    9448 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1011 19:01:42.797986    9448 cache.go:107] acquiring lock: {Name:mkf3ae7199fe86f09763e1a10cce7a56654c6cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.799153    9448 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:01:42.801813    9448 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:01:42.801813    9448 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:01:42.804608    9448 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:01:42.810439    9448 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:01:42.815448    9448 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:01:42.815448    9448 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1011 19:01:42.819423    9448 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1011 19:01:42.844430    9448 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1011 19:01:42.844430    9448 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1011 19:01:42.844430    9448 cache.go:195] Successfully downloaded all kic artifacts
	I1011 19:01:42.844430    9448 start.go:365] acquiring machines lock for no-preload-517500: {Name:mk805f68fd9169e44d973a163ab8af5ee8839274 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.844430    9448 start.go:369] acquired machines lock for "no-preload-517500" in 0s
	I1011 19:01:42.844430    9448 start.go:93] Provisioning new machine with config: &{Name:no-preload-517500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-517500 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 19:01:42.844430    9448 start.go:125] createHost starting for "" (driver="docker")
	I1011 19:01:39.125342    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:39.125342    5476 retry.go:31] will retry after 851.309614ms: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:39.987825    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:40.005219    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:40.005539    5476 retry.go:31] will retry after 1.036481518s: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:41.061763    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:41.083786    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:41.083786    5476 retry.go:31] will retry after 1.251967696s: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:42.343088    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
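The `retry.go` lines above poll `https://127.0.0.1:52534/healthz` with a growing, jittered delay (851ms → 1.04s → 1.25s) until the apiserver stops answering 500. A minimal sketch of that loop, assuming a `check()` callable standing in for the HTTPS probe (all names hypothetical, not minikube's actual retry API):

```python
import random
import time

def retry_until_healthy(check, max_wait=120.0, base=0.5, factor=1.3):
    """Call check() until it returns True, sleeping with jittered
    exponential backoff between attempts; give up after max_wait seconds.
    Returns the number of attempts made."""
    deadline = time.monotonic() + max_wait
    delay = base
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        if check():
            return attempts
        # Jitter the delay so concurrent clients don't retry in lockstep
        time.sleep(delay * random.uniform(0.8, 1.2))
        delay *= factor
    raise TimeoutError("healthz never became ready")
```

In the log the failing hook is `[-]poststarthook/rbac/bootstrap-roles`, which normally clears on its own once the bootstrap RBAC objects are written, so the poller simply waits it out.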
	I1011 19:01:42.383079    1140 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1011 19:01:42.383079    1140 start.go:159] libmachine.API.Create for "old-k8s-version-796400" (driver="docker")
	I1011 19:01:42.383079    1140 client.go:168] LocalClient.Create starting
	I1011 19:01:42.384079    1140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1011 19:01:42.384079    1140 main.go:141] libmachine: Decoding PEM data...
	I1011 19:01:42.384079    1140 main.go:141] libmachine: Parsing certificate...
	I1011 19:01:42.384079    1140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1011 19:01:42.385081    1140 main.go:141] libmachine: Decoding PEM data...
	I1011 19:01:42.385081    1140 main.go:141] libmachine: Parsing certificate...
	I1011 19:01:42.398076    1140 cli_runner.go:164] Run: docker network inspect old-k8s-version-796400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1011 19:01:42.578866    1140 cli_runner.go:211] docker network inspect old-k8s-version-796400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1011 19:01:42.584868    1140 network_create.go:281] running [docker network inspect old-k8s-version-796400] to gather additional debugging logs...
	I1011 19:01:42.584868    1140 cli_runner.go:164] Run: docker network inspect old-k8s-version-796400
	W1011 19:01:42.812427    1140 cli_runner.go:211] docker network inspect old-k8s-version-796400 returned with exit code 1
	I1011 19:01:42.812427    1140 network_create.go:284] error running [docker network inspect old-k8s-version-796400]: docker network inspect old-k8s-version-796400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-796400 not found
	I1011 19:01:42.812427    1140 network_create.go:286] output of [docker network inspect old-k8s-version-796400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-796400 not found
	
	** /stderr **
	I1011 19:01:42.824425    1140 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 19:01:43.034418    1140 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:43.065416    1140 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:43.096417    1140 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:43.119420    1140 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002339950}
	I1011 19:01:43.119420    1140 network_create.go:124] attempt to create docker network old-k8s-version-796400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1011 19:01:43.125424    1140 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-796400 old-k8s-version-796400
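The `network.go` lines above walk candidate private /24 subnets (192.168.49.0, then 58.0, 67.0, 76.0 — the third octet stepping by 9), skip any already reserved by an existing Docker network, and create the new network on the first free one. A minimal sketch of that scan, assuming the step of 9 seen in the log and a pre-collected set of reserved subnets (the function name and parameters are hypothetical):

```python
import ipaddress

def first_free_subnet(reserved, start=49, step=9, limit=255):
    """Scan 192.168.<octet>.0/24 starting at `start`, stepping the third
    octet by `step`, and return the first subnet not in `reserved`."""
    octet = start
    while octet < limit:
        candidate = ipaddress.ip_network(f"192.168.{octet}.0/24")
        if candidate not in reserved:
            return candidate
        octet += step
    raise RuntimeError("no free private /24 subnet found")

# Subnets the log shows as already taken by other minikube networks
reserved = {ipaddress.ip_network(f"192.168.{o}.0/24") for o in (49, 58, 67)}
print(first_free_subnet(reserved))  # -> 192.168.76.0/24
```

This matches the log: 49, 58, and 67 are skipped as reserved, and 192.168.76.0/24 is chosen for `old-k8s-version-796400`.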
	I1011 19:01:42.854419    9448 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1011 19:01:42.854419    9448 start.go:159] libmachine.API.Create for "no-preload-517500" (driver="docker")
	I1011 19:01:42.854419    9448 client.go:168] LocalClient.Create starting
	I1011 19:01:42.855424    9448 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1011 19:01:42.855424    9448 main.go:141] libmachine: Decoding PEM data...
	I1011 19:01:42.855424    9448 main.go:141] libmachine: Parsing certificate...
	I1011 19:01:42.855424    9448 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1011 19:01:42.855424    9448 main.go:141] libmachine: Decoding PEM data...
	I1011 19:01:42.856429    9448 main.go:141] libmachine: Parsing certificate...
	I1011 19:01:42.864439    9448 cli_runner.go:164] Run: docker network inspect no-preload-517500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1011 19:01:42.907432    9448 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.002426    9448 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.034418    9448 cli_runner.go:211] docker network inspect no-preload-517500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1011 19:01:43.041435    9448 network_create.go:281] running [docker network inspect no-preload-517500] to gather additional debugging logs...
	I1011 19:01:43.041435    9448 cli_runner.go:164] Run: docker network inspect no-preload-517500
	W1011 19:01:43.096417    9448 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.192425    9448 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.207424    9448 cli_runner.go:211] docker network inspect no-preload-517500 returned with exit code 1
	I1011 19:01:43.207424    9448 network_create.go:284] error running [docker network inspect no-preload-517500]: docker network inspect no-preload-517500: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-517500 not found
	I1011 19:01:43.207424    9448 network_create.go:286] output of [docker network inspect no-preload-517500]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-517500 not found
	
	** /stderr **
	I1011 19:01:43.215429    9448 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1011 19:01:43.296426    9448 image.go:187] authn lookup for registry.k8s.io/coredns/coredns:v1.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.396477    9448 image.go:187] authn lookup for registry.k8s.io/pause:3.9 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.484440    9448 image.go:187] authn lookup for registry.k8s.io/etcd:3.5.9-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:01:43.591968    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2
	I1011 19:01:43.602262    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2
	I1011 19:01:43.608478    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2
	I1011 19:01:43.638969    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1
	I1011 19:01:43.658470    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2
	I1011 19:01:43.804903    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9
	I1011 19:01:43.857140    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0
	I1011 19:01:43.924468    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9 exists
	I1011 19:01:43.924468    9448 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.9" took 1.323971s
	I1011 19:01:43.924468    9448 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9 succeeded
	I1011 19:01:44.248025    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1 exists
	I1011 19:01:44.248025    9448 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.10.1" took 1.6475262s
	I1011 19:01:44.248025    9448 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1 succeeded
	I1011 19:01:44.508531    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2 exists
	I1011 19:01:44.508531    9448 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.28.2" took 1.9080315s
	I1011 19:01:44.508531    9448 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2 succeeded
	I1011 19:01:45.264475    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2 exists
	I1011 19:01:45.265170    9448 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.28.2" took 2.6646672s
	I1011 19:01:45.265240    9448 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2 succeeded
	I1011 19:01:45.553958    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2 exists
	I1011 19:01:45.554434    9448 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.28.2" took 2.9539295s
	I1011 19:01:45.554434    9448 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2 succeeded
	I1011 19:01:45.795975    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2 exists
	I1011 19:01:45.795975    9448 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.28.2" took 3.1954694s
	I1011 19:01:45.795975    9448 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2 succeeded
	I1011 19:01:45.901699    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0 exists
	I1011 19:01:45.901894    9448 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.5.9-0" took 3.3013882s
	I1011 19:01:45.901894    9448 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0 succeeded
	I1011 19:01:45.901894    9448 cache.go:87] Successfully saved all images to host disk.
	I1011 19:01:44.361905    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:44.361990    5476 retry.go:31] will retry after 1.517209465s: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:45.884001    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:45.918915    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:45.918915    5476 kubeadm.go:611] needs reconfigure: apiserver error: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:45.918915    5476 kubeadm.go:1128] stopping kube-system containers ...
	I1011 19:01:45.926573    5476 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 19:01:45.985555    5476 docker.go:464] Stopping containers: [3a68e3e25c04 46a29adb775e 8cbb52bc4624 6ae2fd93692b e3fc1b46e1fe 8dd1f463809a ffd4e4805972 0f568baa0e86 7916b8c1fe0a b6ad1cd53788 2940af478bf4 19c537ee4c38 2723c4f657e7 5e12dae82588 c204959f5acc df774251868c 6945e4707c4d 039067db1d5f f6959012a17f 7da3ce2f52d8 ded1b9f0e8c8 92e3eeffa421]
	I1011 19:01:45.991576    5476 ssh_runner.go:195] Run: docker stop 3a68e3e25c04 46a29adb775e 8cbb52bc4624 6ae2fd93692b e3fc1b46e1fe 8dd1f463809a ffd4e4805972 0f568baa0e86 7916b8c1fe0a b6ad1cd53788 2940af478bf4 19c537ee4c38 2723c4f657e7 5e12dae82588 c204959f5acc df774251868c 6945e4707c4d 039067db1d5f f6959012a17f 7da3ce2f52d8 ded1b9f0e8c8 92e3eeffa421
	I1011 19:01:49.417172    1140 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-796400 old-k8s-version-796400: (6.2917196s)
	I1011 19:01:49.417172    1140 network_create.go:108] docker network old-k8s-version-796400 192.168.76.0/24 created
	I1011 19:01:49.417172    1140 kic.go:118] calculated static IP "192.168.76.2" for the "old-k8s-version-796400" container
	I1011 19:01:49.429188    1140 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1011 19:01:49.627666    1140 cli_runner.go:164] Run: docker volume create old-k8s-version-796400 --label name.minikube.sigs.k8s.io=old-k8s-version-796400 --label created_by.minikube.sigs.k8s.io=true
	I1011 19:01:49.825172    1140 oci.go:103] Successfully created a docker volume old-k8s-version-796400
	I1011 19:01:49.830157    1140 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-796400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-796400 --entrypoint /usr/bin/test -v old-k8s-version-796400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1011 19:01:49.307012    9448 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (6.091195s)
	I1011 19:01:49.337488    9448 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:49.369179    9448 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:49.417172    9448 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:49.449183    9448 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:49.485276    9448 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00279ad80}
	I1011 19:01:49.485375    9448 network_create.go:124] attempt to create docker network no-preload-517500 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1011 19:01:49.493584    9448 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-517500 no-preload-517500
	W1011 19:01:49.683382    9448 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-517500 no-preload-517500 returned with exit code 1
	W1011 19:01:49.683462    9448 network_create.go:149] failed to create docker network no-preload-517500 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-517500 no-preload-517500: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1011 19:01:49.683488    9448 network_create.go:116] failed to create docker network no-preload-517500 192.168.85.0/24, will retry: subnet is taken
	I1011 19:01:49.714450    9448 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:49.734862    9448 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00279b590}
	I1011 19:01:49.734862    9448 network_create.go:124] attempt to create docker network no-preload-517500 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1011 19:01:49.740890    9448 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-517500 no-preload-517500
	I1011 19:01:55.444916    9448 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-517500 no-preload-517500: (5.7039999s)
	I1011 19:01:55.444916    9448 network_create.go:108] docker network no-preload-517500 192.168.94.0/24 created
	I1011 19:01:55.444916    9448 kic.go:118] calculated static IP "192.168.94.2" for the "no-preload-517500" container
	I1011 19:01:55.466232    9448 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1011 19:01:55.671344    9448 cli_runner.go:164] Run: docker volume create no-preload-517500 --label name.minikube.sigs.k8s.io=no-preload-517500 --label created_by.minikube.sigs.k8s.io=true
	I1011 19:01:55.916966    9448 oci.go:103] Successfully created a docker volume no-preload-517500
	I1011 19:01:55.924968    9448 cli_runner.go:164] Run: docker run --rm --name no-preload-517500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-517500 --entrypoint /usr/bin/test -v no-preload-517500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1011 19:01:57.663053    5476 ssh_runner.go:235] Completed: docker stop 3a68e3e25c04 46a29adb775e 8cbb52bc4624 6ae2fd93692b e3fc1b46e1fe 8dd1f463809a ffd4e4805972 0f568baa0e86 7916b8c1fe0a b6ad1cd53788 2940af478bf4 19c537ee4c38 2723c4f657e7 5e12dae82588 c204959f5acc df774251868c 6945e4707c4d 039067db1d5f f6959012a17f 7da3ce2f52d8 ded1b9f0e8c8 92e3eeffa421: (11.6714234s)
	I1011 19:01:57.678496    5476 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 19:01:55.617318    1408 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-769500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (27.1777474s)
	I1011 19:01:55.617318    1408 kic.go:200] duration metric: took 27.185808 seconds to extract preloaded images to volume
	I1011 19:01:55.624329    1408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:56.026983    1408 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:80 SystemTime:2023-10-11 19:01:55.9726196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:56.033958    1408 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1011 19:01:56.473138    1408 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-769500 --name force-systemd-env-769500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-769500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-769500 --network force-systemd-env-769500 --ip 192.168.67.2 --volume force-systemd-env-769500:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1011 19:01:57.958144    1408 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-769500 --name force-systemd-env-769500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-769500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-769500 --network force-systemd-env-769500 --ip 192.168.67.2 --volume force-systemd-env-769500:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae: (1.4849993s)
	I1011 19:01:57.970865    1408 cli_runner.go:164] Run: docker container inspect force-systemd-env-769500 --format={{.State.Running}}
	I1011 19:01:57.234958    1140 cli_runner.go:217] Completed: docker run --rm --name old-k8s-version-796400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-796400 --entrypoint /usr/bin/test -v old-k8s-version-796400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (7.4047669s)
	I1011 19:01:57.234958    1140 oci.go:107] Successfully prepared a docker volume old-k8s-version-796400
	I1011 19:01:57.234958    1140 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1011 19:01:57.234958    1140 kic.go:191] Starting extracting preloaded images to volume ...
	I1011 19:01:57.240923    1140 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-796400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1011 19:01:58.145307    9448 cli_runner.go:217] Completed: docker run --rm --name no-preload-517500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-517500 --entrypoint /usr/bin/test -v no-preload-517500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (2.2193272s)
	I1011 19:01:58.145307    9448 oci.go:107] Successfully prepared a docker volume no-preload-517500
	I1011 19:01:58.145307    9448 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 19:01:58.155330    9448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:58.601383    9448 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:87 SystemTime:2023-10-11 19:01:58.5322016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:58.610359    9448 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1011 19:01:59.036168    9448 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-517500 --name no-preload-517500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-517500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-517500 --network no-preload-517500 --ip 192.168.94.2 --volume no-preload-517500:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1011 19:02:00.315953    9448 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-517500 --name no-preload-517500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-517500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-517500 --network no-preload-517500 --ip 192.168.94.2 --volume no-preload-517500:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae: (1.2797419s)
	I1011 19:02:00.326681    9448 cli_runner.go:164] Run: docker container inspect no-preload-517500 --format={{.State.Running}}
	I1011 19:02:00.560578    9448 cli_runner.go:164] Run: docker container inspect no-preload-517500 --format={{.State.Status}}
	I1011 19:02:00.783689    9448 cli_runner.go:164] Run: docker exec no-preload-517500 stat /var/lib/dpkg/alternatives/iptables
	I1011 19:02:01.179436    9448 oci.go:144] the created container "no-preload-517500" has a running status.
	I1011 19:02:01.179436    9448 kic.go:222] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa...
	I1011 19:01:58.092441    5476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 19:01:58.176305    5476 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Oct 11 19:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 11 19:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Oct 11 19:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 11 19:00 /etc/kubernetes/scheduler.conf
	
	I1011 19:01:58.191317    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 19:01:58.275299    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 19:01:58.425341    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 19:01:58.457331    5476 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1011 19:01:58.473314    5476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 19:01:58.512305    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 19:01:58.538307    5476 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1011 19:01:58.550306    5476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 19:01:58.593354    5476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 19:01:58.619362    5476 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1011 19:01:58.619362    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:01:58.844867    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.133319    5476 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2884459s)
	I1011 19:02:00.133319    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.570573    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.768689    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.993166    5476 api_server.go:52] waiting for apiserver process to appear ...
	I1011 19:02:01.010109    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:01.162421    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:01.792140    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:02.294577    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:02.805479    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:01:58.200307    1408 cli_runner.go:164] Run: docker container inspect force-systemd-env-769500 --format={{.State.Status}}
	I1011 19:01:58.442311    1408 cli_runner.go:164] Run: docker exec force-systemd-env-769500 stat /var/lib/dpkg/alternatives/iptables
	I1011 19:01:58.807036    1408 oci.go:144] the created container "force-systemd-env-769500" has a running status.
	I1011 19:01:58.807036    1408 kic.go:222] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa...
	I1011 19:01:59.432646    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1011 19:01:59.442921    1408 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1011 19:01:59.711878    1408 cli_runner.go:164] Run: docker container inspect force-systemd-env-769500 --format={{.State.Status}}
	I1011 19:01:59.933686    1408 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1011 19:01:59.934688    1408 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-769500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1011 19:02:00.258658    1408 kic.go:262] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa...
	I1011 19:02:01.439339    9448 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1011 19:02:01.693203    9448 cli_runner.go:164] Run: docker container inspect no-preload-517500 --format={{.State.Status}}
	I1011 19:02:01.958033    9448 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1011 19:02:01.958033    9448 kic_runner.go:114] Args: [docker exec --privileged no-preload-517500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1011 19:02:02.313534    9448 kic.go:262] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa...
	I1011 19:02:05.303269    9448 cli_runner.go:164] Run: docker container inspect no-preload-517500 --format={{.State.Status}}
	I1011 19:02:05.500674    9448 machine.go:88] provisioning docker machine ...
	I1011 19:02:05.500674    9448 ubuntu.go:169] provisioning hostname "no-preload-517500"
	I1011 19:02:05.506684    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:05.694505    9448 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:05.703505    9448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52842 <nil> <nil>}
	I1011 19:02:05.704506    9448 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-517500 && echo "no-preload-517500" | sudo tee /etc/hostname
	I1011 19:02:05.941504    9448 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-517500
	
	I1011 19:02:05.947487    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:06.139106    9448 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:06.140111    9448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52842 <nil> <nil>}
	I1011 19:02:06.140111    9448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-517500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-517500/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-517500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 19:02:03.297188    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:03.798871    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:04.290450    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:04.565797    5476 api_server.go:72] duration metric: took 3.5726139s to wait for apiserver process to appear ...
	I1011 19:02:04.565797    5476 api_server.go:88] waiting for apiserver healthz status ...
	I1011 19:02:04.565797    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:04.570848    5476 api_server.go:269] stopped: https://127.0.0.1:52534/healthz: Get "https://127.0.0.1:52534/healthz": EOF
	I1011 19:02:04.570848    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:04.574810    5476 api_server.go:269] stopped: https://127.0.0.1:52534/healthz: Get "https://127.0.0.1:52534/healthz": EOF
	I1011 19:02:05.090000    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:03.557738    1408 cli_runner.go:164] Run: docker container inspect force-systemd-env-769500 --format={{.State.Status}}
	I1011 19:02:03.741504    1408 machine.go:88] provisioning docker machine ...
	I1011 19:02:03.741562    1408 ubuntu.go:169] provisioning hostname "force-systemd-env-769500"
	I1011 19:02:03.749615    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:03.956819    1408 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:03.972078    1408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52837 <nil> <nil>}
	I1011 19:02:03.972078    1408 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-769500 && echo "force-systemd-env-769500" | sudo tee /etc/hostname
	I1011 19:02:04.200199    1408 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-769500
	
	I1011 19:02:04.212463    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:04.437484    1408 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:04.438448    1408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52837 <nil> <nil>}
	I1011 19:02:04.438448    1408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-769500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-769500/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-769500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 19:02:04.645764    1408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 19:02:04.645764    1408 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I1011 19:02:04.645764    1408 ubuntu.go:177] setting up certificates
	I1011 19:02:04.645764    1408 provision.go:83] configureAuth start
	I1011 19:02:04.657680    1408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-769500
	I1011 19:02:04.855615    1408 provision.go:138] copyHostCerts
	I1011 19:02:04.855678    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem
	I1011 19:02:04.855678    1408 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I1011 19:02:04.855678    1408 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I1011 19:02:04.856858    1408 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I1011 19:02:04.858329    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem
	I1011 19:02:04.858707    1408 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I1011 19:02:04.858802    1408 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I1011 19:02:04.859357    1408 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1011 19:02:04.860753    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem
	I1011 19:02:04.861306    1408 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I1011 19:02:04.861306    1408 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I1011 19:02:04.861814    1408 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1011 19:02:04.863602    1408 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-env-769500 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-env-769500]
	I1011 19:02:04.988779    1408 provision.go:172] copyRemoteCerts
	I1011 19:02:05.002492    1408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 19:02:05.012347    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:05.198306    1408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52837 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa Username:docker}
	I1011 19:02:05.337599    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1011 19:02:05.337599    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 19:02:05.407652    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1011 19:02:05.407652    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1245 bytes)
	I1011 19:02:05.459684    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1011 19:02:05.459684    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 19:02:05.518671    1408 provision.go:86] duration metric: configureAuth took 872.9037ms
	I1011 19:02:05.518671    1408 ubuntu.go:193] setting minikube options for container-runtime
	I1011 19:02:05.518671    1408 config.go:182] Loaded profile config "force-systemd-env-769500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:02:05.524677    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:05.711506    1408 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:05.711506    1408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52837 <nil> <nil>}
	I1011 19:02:05.711506    1408 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1011 19:02:05.912893    1408 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1011 19:02:05.912893    1408 ubuntu.go:71] root file system type: overlay
	I1011 19:02:05.913432    1408 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1011 19:02:05.920485    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:06.121108    1408 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:06.122103    1408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52837 <nil> <nil>}
	I1011 19:02:06.122103    1408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1011 19:02:06.349458    1408 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1011 19:02:06.357469    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:06.547733    1408 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:06.547733    1408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52837 <nil> <nil>}
	I1011 19:02:06.547733    1408 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1011 19:02:06.329965    9448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 19:02:06.329965    9448 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I1011 19:02:06.329965    9448 ubuntu.go:177] setting up certificates
	I1011 19:02:06.329965    9448 provision.go:83] configureAuth start
	I1011 19:02:06.336445    9448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-517500
	I1011 19:02:06.528982    9448 provision.go:138] copyHostCerts
	I1011 19:02:06.529475    9448 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I1011 19:02:06.529527    9448 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I1011 19:02:06.529773    9448 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1011 19:02:06.531481    9448 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I1011 19:02:06.531565    9448 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I1011 19:02:06.532008    9448 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1011 19:02:06.533503    9448 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I1011 19:02:06.533503    9448 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I1011 19:02:06.533804    9448 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I1011 19:02:06.534867    9448 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-517500 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-517500]
	I1011 19:02:06.749859    9448 provision.go:172] copyRemoteCerts
	I1011 19:02:06.766479    9448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 19:02:06.774601    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:06.966369    9448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52842 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa Username:docker}
	I1011 19:02:07.123826    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 19:02:07.181134    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1011 19:02:07.231309    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 19:02:07.287250    9448 provision.go:86] duration metric: configureAuth took 957.2808ms
	I1011 19:02:07.287343    9448 ubuntu.go:193] setting minikube options for container-runtime
	I1011 19:02:07.287996    9448 config.go:182] Loaded profile config "no-preload-517500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:02:07.297026    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:07.495274    9448 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:07.496720    9448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52842 <nil> <nil>}
	I1011 19:02:07.496720    9448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1011 19:02:07.695572    9448 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1011 19:02:07.695572    9448 ubuntu.go:71] root file system type: overlay
	I1011 19:02:07.696580    9448 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1011 19:02:07.709561    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:07.897756    9448 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:07.898799    9448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52842 <nil> <nil>}
	I1011 19:02:07.898799    9448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1011 19:02:08.130674    9448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1011 19:02:08.136507    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:08.350233    9448 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:08.351472    9448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52842 <nil> <nil>}
	I1011 19:02:08.351472    9448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
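The SSH command above is minikube's idempotent unit-update step: the new `docker.service` is staged as `docker.service.new`, and only if it differs from the installed copy is it moved into place and the daemon reloaded and restarted. A minimal sketch of that pattern, with the daemon-reload/restart step reduced to a hypothetical `on_change` callback (not part of minikube's code):

```python
import filecmp
import os

def update_unit(current_path, new_path, on_change=None):
    """Install new_path over current_path only when contents differ.

    Mirrors the shell pattern from the log:
        diff -u current new || { mv new current; daemon-reload && restart; }
    `on_change` is a stand-in for the daemon-reload/restart step.
    Returns True when the unit file was replaced.
    """
    if os.path.exists(current_path) and filecmp.cmp(current_path, new_path, shallow=False):
        os.remove(new_path)  # contents identical: discard the staged copy, change nothing
        return False
    os.replace(new_path, current_path)  # atomic rename on POSIX
    if on_change:
        on_change()  # e.g. systemctl daemon-reload && systemctl restart docker
    return True
```

Because the swap only fires on a real diff, repeated provisioning runs leave an unchanged unit (and the running daemon) alone.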
	I1011 19:02:09.858527    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 19:02:09.859067    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
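The 403 body above is a standard Kubernetes `Status` object (anonymous probes are forbidden until RBAC bootstrap completes). A small sketch of pulling the salient fields out of such a body, assuming only that the response parses as JSON:

```python
import json

def describe_status(body):
    """Extract (code, reason, message) from a Kubernetes Status error body,
    like the 403 returned to the anonymous /healthz probe in the log."""
    status = json.loads(body)
    return status.get("code"), status.get("reason"), status.get("message")
```

This is only a decoding helper for reading logs like these; a real client would use the apiserver's typed error handling instead.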
	I1011 19:02:09.859067    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:10.154272    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:10.154272    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
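The repeated blocks above are kube-apiserver's verbose `/healthz` body: one `[+]name ok` or `[-]name failed: ...` line per check, ending in `healthz check failed` whenever any check fails. A sketch of splitting such a body into passing and failing check names, to make the progression in these logs (etcd, then crd-informer-synced, then the bootstrap hooks coming up) easier to follow:

```python
def parse_healthz(body):
    """Split a verbose kube-apiserver /healthz body into (passed, failed)
    lists of check names. Lines look like '[+]ping ok' or
    '[-]etcd failed: reason withheld'; other lines are ignored."""
    passed, failed = [], []
    for line in body.splitlines():
        line = line.strip()
        if line.startswith("[+]"):
            passed.append(line[3:].split()[0])
        elif line.startswith("[-]"):
            failed.append(line[3:].split()[0])
    return passed, failed
```

Run over the 500 bodies above, the failed list shrinks on each retry until only `rbac/bootstrap-roles` remains, which is the normal startup ordering.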
	I1011 19:02:10.154272    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:10.169841    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:10.169841    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:10.585851    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:10.598549    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:10.598549    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:11.075408    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:11.446265    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:11.446265    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:11.580523    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:11.595954    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:11.595954    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:12.086403    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:12.119892    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:12.119892    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:12.588448    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:12.664492    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:12.664492    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:12.393309    1408 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-11 19:02:06.336215000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
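The generated comments in the diff above describe the systemd drop-in convention: a bare `ExecStart=` line first clears the inherited command, because a non-`Type=oneshot` service may carry only one effective `ExecStart`. That invariant can be checked mechanically; a minimal sketch (the function name `execstart_override_valid` is hypothetical, assuming the unit text is available as a string):

```python
def execstart_override_valid(unit_text: str) -> bool:
    """Check the ExecStart-reset pattern used by the drop-in above.

    systemd rejects a non-oneshot unit with more than one effective
    ExecStart command, so an override must start with an empty
    `ExecStart=` reset and then set exactly one real command.
    """
    values = [line.split("=", 1)[1].strip()
              for line in unit_text.splitlines()
              if line.strip().startswith("ExecStart=")]
    nonempty = [v for v in values if v]
    if len(values) <= 1:
        return True  # zero or one command: always valid
    # multiple ExecStart= lines: first must be the empty reset,
    # leaving exactly one real command
    return values[0] == "" and len(nonempty) == 1
```

Without the leading reset, systemd refuses to start the service with "Service has more than one ExecStart= setting", exactly as the comment in the unit warns.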
	
	I1011 19:02:12.393413    1408 machine.go:91] provisioned docker machine in 8.6518111s
	I1011 19:02:12.393413    1408 client.go:171] LocalClient.Create took 47.7096332s
	I1011 19:02:12.393486    1408 start.go:167] duration metric: libmachine.API.Create for "force-systemd-env-769500" took 47.7097058s
	I1011 19:02:12.393556    1408 start.go:300] post-start starting for "force-systemd-env-769500" (driver="docker")
	I1011 19:02:12.393556    1408 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 19:02:12.408935    1408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 19:02:12.415924    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:12.604426    1408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52837 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa Username:docker}
	I1011 19:02:12.755561    1408 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 19:02:12.768566    1408 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 19:02:12.768566    1408 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 19:02:12.768566    1408 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 19:02:12.768566    1408 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1011 19:02:12.768566    1408 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I1011 19:02:12.769567    1408 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I1011 19:02:12.770613    1408 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem -> 15562.pem in /etc/ssl/certs
	I1011 19:02:12.770613    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem -> /etc/ssl/certs/15562.pem
	I1011 19:02:12.786545    1408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 19:02:12.809747    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem --> /etc/ssl/certs/15562.pem (1708 bytes)
	I1011 19:02:12.861908    1408 start.go:303] post-start completed in 468.3504ms
	I1011 19:02:12.877896    1408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-769500
	I1011 19:02:13.077337    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:13.164113    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:13.164113    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:13.583348    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:13.664216    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:13.664216    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:14.088164    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:14.153505    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:14.153505    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:14.589228    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:14.601220    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 200:
	ok
	I1011 19:02:14.619236    5476 api_server.go:141] control plane version: v1.28.2
	I1011 19:02:14.619236    5476 api_server.go:131] duration metric: took 10.0533926s to wait for apiserver health ...
	I1011 19:02:14.619236    5476 cni.go:84] Creating CNI manager for ""
	I1011 19:02:14.619236    5476 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 19:02:14.622242    5476 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
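The loop traced above — polling `/healthz`, treating the 500 responses during `rbac/bootstrap-roles` bootstrapping as transient, and stopping once the endpoint returns 200 — can be sketched as follows (a simplified illustration, not minikube's actual `api_server.go` code; `wait_for_healthz` and the injected `check` callable are hypothetical names):

```python
import time

def wait_for_healthz(check, timeout=120.0, interval=0.5):
    """Poll `check` until it reports (200, body) or the timeout expires.

    `check` is any callable returning (status_code, body); in the log
    above the real probe is an HTTPS GET against
    https://127.0.0.1:<port>/healthz.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status, body = check()
        if status == 200:
            return body
        # A 500 with "[-]poststarthook/rbac/bootstrap-roles failed" is
        # expected while the apiserver finishes bootstrapping; keep polling.
        time.sleep(interval)
    raise TimeoutError("apiserver /healthz never returned 200")
```

In the run above this wait took about 10 seconds ("duration metric: took 10.0533926s to wait for apiserver health") before the check succeeded with a plain `ok`.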
	I1011 19:02:12.493431    9448 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-11 19:02:08.116215000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1011 19:02:12.493431    9448 machine.go:91] provisioned docker machine in 6.9927243s
	I1011 19:02:12.493431    9448 client.go:171] LocalClient.Create took 29.6378708s
	I1011 19:02:12.493431    9448 start.go:167] duration metric: libmachine.API.Create for "no-preload-517500" took 29.6388754s
	I1011 19:02:12.493431    9448 start.go:300] post-start starting for "no-preload-517500" (driver="docker")
	I1011 19:02:12.493431    9448 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 19:02:12.508441    9448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 19:02:12.516434    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:12.698424    9448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52842 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa Username:docker}
	I1011 19:02:12.848744    9448 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 19:02:12.861081    9448 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 19:02:12.861383    9448 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 19:02:12.861383    9448 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 19:02:12.861383    9448 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1011 19:02:12.861383    9448 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I1011 19:02:12.861908    9448 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I1011 19:02:12.862884    9448 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem -> 15562.pem in /etc/ssl/certs
	I1011 19:02:12.879894    9448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 19:02:12.901872    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem --> /etc/ssl/certs/15562.pem (1708 bytes)
	I1011 19:02:12.959018    9448 start.go:303] post-start completed in 465.5855ms
	I1011 19:02:12.974995    9448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-517500
	I1011 19:02:13.171132    9448 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\config.json ...
	I1011 19:02:13.186096    9448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 19:02:13.193108    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:13.378586    9448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52842 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa Username:docker}
	I1011 19:02:13.525338    9448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 19:02:13.536340    9448 start.go:128] duration metric: createHost completed in 30.6917691s
	I1011 19:02:13.536340    9448 start.go:83] releasing machines lock for "no-preload-517500", held for 30.6917691s
	I1011 19:02:13.542335    9448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-517500
	I1011 19:02:13.743208    9448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 19:02:13.749221    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:13.752232    9448 ssh_runner.go:195] Run: cat /version.json
	I1011 19:02:13.764231    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:13.946513    9448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52842 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa Username:docker}
	I1011 19:02:13.961148    9448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52842 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa Username:docker}
	I1011 19:02:14.471719    9448 ssh_runner.go:195] Run: systemctl --version
	I1011 19:02:14.502233    9448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 19:02:14.528221    9448 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W1011 19:02:14.549242    9448 start.go:416] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I1011 19:02:14.561221    9448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 19:02:14.641236    9448 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 19:02:14.641236    9448 start.go:472] detecting cgroup driver to use...
	I1011 19:02:14.641236    9448 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1011 19:02:14.641236    9448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:02:14.691220    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1011 19:02:14.725217    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 19:02:14.748226    9448 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1011 19:02:14.762665    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1011 19:02:14.810957    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:02:14.843554    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 19:02:14.881201    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:02:14.918191    9448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 19:02:14.947211    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 19:02:14.987676    9448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 19:02:15.019662    9448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 19:02:15.050684    9448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:15.314353    9448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1011 19:02:15.525031    9448 start.go:472] detecting cgroup driver to use...
	I1011 19:02:15.525031    9448 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1011 19:02:15.534018    9448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1011 19:02:15.562019    9448 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1011 19:02:15.572065    9448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 19:02:15.599017    9448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:02:15.698229    9448 ssh_runner.go:195] Run: which cri-dockerd
	I1011 19:02:15.727094    9448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1011 19:02:15.756123    9448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1011 19:02:15.815360    9448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1011 19:02:16.020625    9448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1011 19:02:16.194824    9448 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1011 19:02:16.194824    9448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1011 19:02:16.245886    9448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:14.633210    5476 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 19:02:14.655235    5476 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1011 19:02:14.699224    5476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 19:02:14.714223    5476 system_pods.go:59] 6 kube-system pods found
	I1011 19:02:14.714223    5476 system_pods.go:61] "coredns-5dd5756b68-g2h9s" [6626c9fe-763e-46b0-a66a-5bd39e157d8d] Running
	I1011 19:02:14.714223    5476 system_pods.go:61] "etcd-pause-375900" [4bf74444-af82-451c-b1d3-36e322aebe0b] Running
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-apiserver-pause-375900" [4b6ca595-2579-4609-972b-3d352dbc9971] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-controller-manager-pause-375900" [b6f72b4d-30c2-4679-9634-612b1e81dc5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-proxy-6wv6x" [86829575-b97b-4960-a459-934aecb00dd5] Running
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-scheduler-pause-375900" [e9b9a50f-2ab2-414b-a8a4-51708cdfb4d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 19:02:14.714223    5476 system_pods.go:74] duration metric: took 14.9987ms to wait for pod list to return data ...
	I1011 19:02:14.714223    5476 node_conditions.go:102] verifying NodePressure condition ...
	I1011 19:02:14.748226    5476 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1011 19:02:14.748226    5476 node_conditions.go:123] node cpu capacity is 16
	I1011 19:02:14.748226    5476 node_conditions.go:105] duration metric: took 34.0027ms to run NodePressure ...
	I1011 19:02:14.748226    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:15.356351    5476 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1011 19:02:15.367367    5476 kubeadm.go:787] kubelet initialised
	I1011 19:02:15.367367    5476 kubeadm.go:788] duration metric: took 11.0154ms waiting for restarted kubelet to initialise ...
	I1011 19:02:15.367367    5476 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:15.379351    5476 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:15.395368    5476 pod_ready.go:92] pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:15.395368    5476 pod_ready.go:81] duration metric: took 16.0168ms waiting for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:15.395368    5476 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.763545    5476 pod_ready.go:92] pod "etcd-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:16.763615    5476 pod_ready.go:81] duration metric: took 1.368241s waiting for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.763615    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.778930    5476 pod_ready.go:92] pod "kube-apiserver-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:16.778930    5476 pod_ready.go:81] duration metric: took 15.2525ms waiting for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.778930    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:13.077790    1408 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\config.json ...
	I1011 19:02:13.103710    1408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 19:02:13.109487    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:13.298717    1408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52837 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa Username:docker}
	I1011 19:02:13.430222    1408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 19:02:13.440812    1408 start.go:128] duration metric: createHost completed in 48.7620324s
	I1011 19:02:13.440812    1408 start.go:83] releasing machines lock for "force-systemd-env-769500", held for 48.7630068s
	I1011 19:02:13.449225    1408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-769500
	I1011 19:02:13.633199    1408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 19:02:13.645400    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:13.650613    1408 ssh_runner.go:195] Run: cat /version.json
	I1011 19:02:13.665226    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:13.854232    1408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52837 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa Username:docker}
	I1011 19:02:13.882211    1408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52837 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa Username:docker}
	I1011 19:02:14.198512    1408 ssh_runner.go:195] Run: systemctl --version
	I1011 19:02:14.218498    1408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 19:02:14.238509    1408 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W1011 19:02:14.263514    1408 start.go:416] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I1011 19:02:14.273493    1408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 19:02:14.343499    1408 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 19:02:14.343499    1408 start.go:472] detecting cgroup driver to use...
	I1011 19:02:14.343499    1408 start.go:476] using "systemd" cgroup driver as enforced via flags
	I1011 19:02:14.343499    1408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:02:14.406896    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1011 19:02:14.442043    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 19:02:14.475547    1408 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I1011 19:02:14.492761    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1011 19:02:14.530250    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:02:14.563215    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 19:02:14.605219    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:02:14.645245    1408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 19:02:14.681216    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 19:02:14.717223    1408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 19:02:14.751220    1408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 19:02:14.798124    1408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:14.971208    1408 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1011 19:02:15.164717    1408 start.go:472] detecting cgroup driver to use...
	I1011 19:02:15.164717    1408 start.go:476] using "systemd" cgroup driver as enforced via flags
	I1011 19:02:15.176716    1408 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1011 19:02:15.209682    1408 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1011 19:02:15.221687    1408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 19:02:15.356351    1408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:02:15.463266    1408 ssh_runner.go:195] Run: which cri-dockerd
	I1011 19:02:15.484252    1408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1011 19:02:15.511759    1408 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1011 19:02:15.563029    1408 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1011 19:02:15.768366    1408 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1011 19:02:15.914983    1408 docker.go:555] configuring docker to use "systemd" as cgroup driver...
	I1011 19:02:15.914983    1408 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1011 19:02:15.971601    1408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:16.138182    1408 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 19:02:17.838191    1408 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.700001s)
	I1011 19:02:17.847190    1408 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 19:02:18.019858    1408 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1011 19:02:18.203364    1408 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 19:02:18.407702    1408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:18.529279    1408 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1011 19:02:18.629649    1408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:18.829632    1408 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1011 19:02:18.997377    1408 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1011 19:02:19.011416    1408 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1011 19:02:19.022407    1408 start.go:540] Will wait 60s for crictl version
	I1011 19:02:19.033405    1408 ssh_runner.go:195] Run: which crictl
	I1011 19:02:19.057421    1408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 19:02:19.177113    1408 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1011 19:02:19.187100    1408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:02:19.260275    1408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:02:16.407929    9448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 19:02:17.937731    9448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.3708348s)
	I1011 19:02:17.953724    9448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 19:02:18.127752    9448 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1011 19:02:18.341424    9448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 19:02:18.519233    9448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:18.758375    9448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1011 19:02:18.824612    9448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:19.028405    9448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1011 19:02:19.214094    9448 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1011 19:02:19.227123    9448 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1011 19:02:19.245091    9448 start.go:540] Will wait 60s for crictl version
	I1011 19:02:19.262097    9448 ssh_runner.go:195] Run: which crictl
	I1011 19:02:19.290092    9448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 19:02:19.472022    9448 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1011 19:02:19.480031    9448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:02:19.557572    9448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:02:16.943578    1140 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-796400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (19.7023803s)
	I1011 19:02:16.943655    1140 kic.go:200] duration metric: took 19.708607 seconds to extract preloaded images to volume
	I1011 19:02:16.951389    1140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:02:17.354443    1140 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:80 SystemTime:2023-10-11 19:02:17.2906278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:02:17.363806    1140 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1011 19:02:17.754374    1140 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-796400 --name old-k8s-version-796400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-796400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-796400 --network old-k8s-version-796400 --ip 192.168.76.2 --volume old-k8s-version-796400:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1011 19:02:18.723373    1140 cli_runner.go:164] Run: docker container inspect old-k8s-version-796400 --format={{.State.Running}}
	I1011 19:02:18.916831    1140 cli_runner.go:164] Run: docker container inspect old-k8s-version-796400 --format={{.State.Status}}
	I1011 19:02:19.134099    1140 cli_runner.go:164] Run: docker exec old-k8s-version-796400 stat /var/lib/dpkg/alternatives/iptables
	I1011 19:02:19.487031    1140 oci.go:144] the created container "old-k8s-version-796400" has a running status.
	I1011 19:02:19.487031    1140 kic.go:222] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa...
	I1011 19:02:19.702811    1140 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1011 19:02:19.948809    1140 cli_runner.go:164] Run: docker container inspect old-k8s-version-796400 --format={{.State.Status}}
	I1011 19:02:20.202629    1140 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1011 19:02:20.202629    1140 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-796400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1011 19:02:20.523631    1140 kic.go:262] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa...
	I1011 19:02:19.691809    9448 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1011 19:02:19.699804    9448 cli_runner.go:164] Run: docker exec -t no-preload-517500 dig +short host.docker.internal
	I1011 19:02:20.096828    9448 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1011 19:02:20.109846    9448 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1011 19:02:20.119846    9448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 19:02:20.160499    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:20.363649    9448 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 19:02:20.373644    9448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 19:02:20.420653    9448 docker.go:689] Got preloaded images: 
	I1011 19:02:20.420653    9448 docker.go:695] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I1011 19:02:20.420653    9448 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 19:02:20.431643    9448 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 19:02:20.434642    9448 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1011 19:02:20.440641    9448 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1011 19:02:20.442649    9448 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:02:20.444650    9448 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 19:02:20.446645    9448 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:02:20.446645    9448 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:02:20.446645    9448 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:02:20.447640    9448 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:02:20.452660    9448 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1011 19:02:20.453639    9448 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1011 19:02:20.457653    9448 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:02:20.458646    9448 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:02:20.460651    9448 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:02:20.460651    9448 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:02:20.467640    9448 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	W1011 19:02:20.552028    9448 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:02:20.646946    9448 image.go:187] authn lookup for registry.k8s.io/pause:3.9 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:02:20.753950    9448 image.go:187] authn lookup for registry.k8s.io/etcd:3.5.9-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:02:20.857934    9448 image.go:187] authn lookup for registry.k8s.io/coredns/coredns:v1.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:02:20.867321    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 19:02:20.919783    9448 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1011 19:02:20.919864    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1011 19:02:20.919957    9448 docker.go:318] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 19:02:20.929001    9448 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1011 19:02:20.966803    9448 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:02:20.999799    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1011 19:02:21.014773    9448 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1011 19:02:21.025812    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1011 19:02:21.025812    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
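The existence check above probes the target path with `stat -c` before deciding to scp the cached image. A minimal sketch of the same pattern (the log's `%!s(MISSING)` is a Go format-verb artifact; the intended format string is presumably `%s %y`, size and modification time — an assumption, not confirmed by the log):

```shell
#!/usr/bin/env bash
# Existence check in the style of the log above: GNU `stat -c` prints
# size and mtime when the file exists, and exits non-zero otherwise.
# ("%s %y" is an assumed reconstruction of the garbled format string.)
f=$(mktemp)
stat -c "%s %y" "$f" >/dev/null && echo "exists, skip transfer"
rm -f "$f"
stat -c "%s %y" "$f" 2>/dev/null || echo "missing, would scp"
```

On a hit the caller skips the copy; on a miss (as in the log) it falls through to the scp step.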
	I1011 19:02:21.067782    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	W1011 19:02:21.080810    9448 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:02:21.123815    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1011 19:02:21.166853    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:02:21.172772    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	W1011 19:02:21.207810    9448 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:02:21.295024    9448 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I1011 19:02:21.295024    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.9 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9
	I1011 19:02:21.295024    9448 docker.go:318] Removing image: registry.k8s.io/pause:3.9
	I1011 19:02:18.873656    5476 pod_ready.go:102] pod "kube-controller-manager-pause-375900" in "kube-system" namespace has status "Ready":"False"
	I1011 19:02:21.371026    5476 pod_ready.go:92] pod "kube-controller-manager-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.371026    5476 pod_ready.go:81] duration metric: took 4.5920747s waiting for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.371026    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.391043    5476 pod_ready.go:92] pod "kube-proxy-6wv6x" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.391043    5476 pod_ready.go:81] duration metric: took 20.0173ms waiting for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.391043    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.412018    5476 pod_ready.go:92] pod "kube-scheduler-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.412018    5476 pod_ready.go:81] duration metric: took 20.9746ms waiting for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.412018    5476 pod_ready.go:38] duration metric: took 6.0446238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:21.412018    5476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 19:02:21.432014    5476 ops.go:34] apiserver oom_adj: -16
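The check above reads the apiserver's OOM adjustment from procfs (`/proc/<pid>/oom_adj`). A sketch of the same read against the current process, using `oom_score_adj`, the non-deprecated successor of the legacy `oom_adj` file the log queries:

```shell
#!/usr/bin/env bash
# Read the OOM-kill adjustment of a process via procfs, as the log's
# `cat /proc/$(pgrep kube-apiserver)/oom_adj` does for the apiserver.
# Uses /proc/self and the modern oom_score_adj file for portability.
score=$(cat /proc/self/oom_score_adj)
echo "oom_score_adj: $score"
```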
	I1011 19:02:21.432014    5476 kubeadm.go:640] restartCluster took 50.1559392s
	I1011 19:02:21.432014    5476 kubeadm.go:406] StartCluster complete in 50.7571735s
	I1011 19:02:21.432014    5476 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:21.432014    5476 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1011 19:02:21.433012    5476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:21.434012    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 19:02:21.434012    5476 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1011 19:02:21.440018    5476 out.go:177] * Enabled addons: 
	I1011 19:02:21.435016    5476 config.go:182] Loaded profile config "pause-375900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:02:21.444015    5476 addons.go:502] enable addons completed in 10.0033ms: enabled=[]
	I1011 19:02:21.448019    5476 kapi.go:59] client config for pause-375900: &rest.Config{Host:"https://127.0.0.1:52534", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\pause-375900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\pause-375900\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e44dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 19:02:21.457026    5476 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-375900" context rescaled to 1 replicas
	I1011 19:02:21.457026    5476 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 19:02:21.462035    5476 out.go:177] * Verifying Kubernetes components...
	I1011 19:02:21.479022    5476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 19:02:21.616332    5476 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1011 19:02:21.627333    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-375900
	I1011 19:02:21.839597    5476 node_ready.go:35] waiting up to 6m0s for node "pause-375900" to be "Ready" ...
	I1011 19:02:21.850416    5476 node_ready.go:49] node "pause-375900" has status "Ready":"True"
	I1011 19:02:21.850509    5476 node_ready.go:38] duration metric: took 10.7886ms waiting for node "pause-375900" to be "Ready" ...
	I1011 19:02:21.850509    5476 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:21.864611    5476 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.885624    5476 pod_ready.go:92] pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.885624    5476 pod_ready.go:81] duration metric: took 21.0126ms waiting for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.885624    5476 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.898610    5476 pod_ready.go:92] pod "etcd-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.898610    5476 pod_ready.go:81] duration metric: took 12.986ms waiting for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.898610    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.179285    5476 pod_ready.go:92] pod "kube-apiserver-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:22.179285    5476 pod_ready.go:81] duration metric: took 280.6738ms waiting for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.179285    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.565413    5476 pod_ready.go:92] pod "kube-controller-manager-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:22.566410    5476 pod_ready.go:81] duration metric: took 387.1239ms waiting for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.566410    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.977552    5476 pod_ready.go:92] pod "kube-proxy-6wv6x" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:22.977552    5476 pod_ready.go:81] duration metric: took 411.1397ms waiting for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.977552    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:19.324121    1408 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1011 19:02:19.333093    1408 cli_runner.go:164] Run: docker exec -t force-systemd-env-769500 dig +short host.docker.internal
	I1011 19:02:19.706802    1408 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1011 19:02:19.719798    1408 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1011 19:02:19.730811    1408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
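The hosts-file update above is a remove-then-append pattern: strip any existing `host.minikube.internal` line, append the fresh mapping, and copy the result back via a temp file so the entry stays unique across restarts. A minimal sketch of the same pattern against a throwaway file (paths and IP are illustrative; minikube edits `/etc/hosts` under sudo):

```shell
#!/usr/bin/env bash
# Idempotent hosts-entry update, mirroring the remove-then-append
# one-liner in the log above. File and IP here are placeholders.
HOSTS=$(mktemp)
IP=192.168.65.254
NAME=host.minikube.internal

printf '127.0.0.1\tlocalhost\n10.0.0.9\t%s\n' "$NAME" > "$HOSTS"

# Drop any line ending in <tab>$NAME, then append the current mapping.
{ grep -v $'\t'"$NAME"'$' "$HOSTS"; printf '%s\t%s\n' "$IP" "$NAME"; } > "$HOSTS.$$"
cp "$HOSTS.$$" "$HOSTS" && rm -f "$HOSTS.$$"

grep -c "$NAME" "$HOSTS"
```

Because the old mapping is filtered out first, rerunning the update never accumulates duplicate entries.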
	I1011 19:02:19.761836    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:19.973803    1408 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 19:02:19.979857    1408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 19:02:20.059255    1408 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 19:02:20.059348    1408 docker.go:619] Images already preloaded, skipping extraction
	I1011 19:02:20.072832    1408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 19:02:20.114837    1408 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 19:02:20.114837    1408 cache_images.go:84] Images are preloaded, skipping loading
	I1011 19:02:20.123846    1408 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1011 19:02:20.245645    1408 cni.go:84] Creating CNI manager for ""
	I1011 19:02:20.245645    1408 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 19:02:20.246635    1408 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1011 19:02:20.246635    1408 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-769500 NodeName:force-systemd-env-769500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 19:02:20.246635    1408 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-769500"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 19:02:20.246635    1408 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=force-systemd-env-769500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-769500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1011 19:02:20.261644    1408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1011 19:02:20.287645    1408 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 19:02:20.298639    1408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 19:02:20.322644    1408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1011 19:02:20.366632    1408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 19:02:20.411638    1408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1011 19:02:20.515633    1408 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1011 19:02:20.525633    1408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 19:02:20.549889    1408 certs.go:56] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500 for IP: 192.168.67.2
	I1011 19:02:20.549889    1408 certs.go:190] acquiring lock for shared ca certs: {Name:mka39b35711ce17aa627001b408a7adb2f266bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.550970    1408 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I1011 19:02:20.551718    1408 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I1011 19:02:20.552601    1408 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.key
	I1011 19:02:20.552816    1408 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.crt with IP's: []
	I1011 19:02:20.739871    1408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.crt ...
	I1011 19:02:20.739871    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.crt: {Name:mk7e160493cd718464216202185387ebafe0343a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.740844    1408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.key ...
	I1011 19:02:20.740844    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.key: {Name:mk14484344be0356993d971268ab9d92dc8f8bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.741860    1408 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key.c7fa3a9e
	I1011 19:02:20.741860    1408 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1011 19:02:20.878316    1408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt.c7fa3a9e ...
	I1011 19:02:20.878316    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt.c7fa3a9e: {Name:mkf56219099746704f9edb9f435708c2c5620049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.880312    1408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key.c7fa3a9e ...
	I1011 19:02:20.880312    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key.c7fa3a9e: {Name:mkdc13a86b7046d42aa4b045c535f8512dba25dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.881325    1408 certs.go:337] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt
	I1011 19:02:20.891318    1408 certs.go:341] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key
	I1011 19:02:20.893317    1408 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.key
	I1011 19:02:20.893317    1408 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.crt with IP's: []
	I1011 19:02:20.989783    1408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.crt ...
	I1011 19:02:20.989783    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.crt: {Name:mkeb355b0f0485fea8521ac40fda9fa4bcefbb0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.991793    1408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.key ...
	I1011 19:02:20.991793    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.key: {Name:mk973b1dedc5375b99f3f30719f8e07f18894466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.992788    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 19:02:20.992788    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 19:02:20.992788    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 19:02:21.004774    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 19:02:21.005779    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 19:02:21.005779    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1011 19:02:21.006822    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 19:02:21.006822    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 19:02:21.006822    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556.pem (1338 bytes)
	W1011 19:02:21.007777    1408 certs.go:433] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556_empty.pem, impossibly tiny 0 bytes
	I1011 19:02:21.007777    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1011 19:02:21.007777    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1011 19:02:21.007777    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1011 19:02:21.008781    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1011 19:02:21.008781    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem (1708 bytes)
	I1011 19:02:21.008781    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:02:21.008781    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556.pem -> /usr/share/ca-certificates/1556.pem
	I1011 19:02:21.009777    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem -> /usr/share/ca-certificates/15562.pem
	I1011 19:02:21.010770    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1011 19:02:21.086785    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 19:02:21.170823    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 19:02:21.249036    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 19:02:21.330027    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 19:02:21.415043    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 19:02:21.479022    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 19:02:21.548122    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 19:02:21.612371    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 19:02:21.676316    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556.pem --> /usr/share/ca-certificates/1556.pem (1338 bytes)
	I1011 19:02:21.730308    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem --> /usr/share/ca-certificates/15562.pem (1708 bytes)
	I1011 19:02:21.801344    1408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 19:02:21.870639    1408 ssh_runner.go:195] Run: openssl version
	I1011 19:02:21.899610    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15562.pem && ln -fs /usr/share/ca-certificates/15562.pem /etc/ssl/certs/15562.pem"
	I1011 19:02:21.930604    1408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15562.pem
	I1011 19:02:21.946415    1408 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 11 18:04 /usr/share/ca-certificates/15562.pem
	I1011 19:02:21.964235    1408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15562.pem
	I1011 19:02:21.995242    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15562.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 19:02:22.037729    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 19:02:22.080633    1408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:02:22.091848    1408 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 11 17:53 /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:02:22.104222    1408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:02:22.126224    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 19:02:22.166477    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1556.pem && ln -fs /usr/share/ca-certificates/1556.pem /etc/ssl/certs/1556.pem"
	I1011 19:02:22.212099    1408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1556.pem
	I1011 19:02:22.225233    1408 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 11 18:04 /usr/share/ca-certificates/1556.pem
	I1011 19:02:22.239222    1408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1556.pem
	I1011 19:02:22.274635    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1556.pem /etc/ssl/certs/51391683.0"
	I1011 19:02:22.318999    1408 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1011 19:02:22.332403    1408 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1011 19:02:22.332830    1408 kubeadm.go:404] StartCluster: {Name:force-systemd-env-769500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-769500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 19:02:22.341515    1408 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 19:02:22.406910    1408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 19:02:22.444799    1408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 19:02:22.466211    1408 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1011 19:02:22.475397    1408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 19:02:22.498852    1408 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 19:02:22.499001    1408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1011 19:02:22.713218    1408 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1011 19:02:22.896720    1408 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 19:02:23.368906    5476 pod_ready.go:92] pod "kube-scheduler-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:23.369009    5476 pod_ready.go:81] duration metric: took 391.4553ms waiting for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:23.369009    5476 pod_ready.go:38] duration metric: took 1.5184932s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:23.369072    5476 api_server.go:52] waiting for apiserver process to appear ...
	I1011 19:02:23.381249    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:23.423242    5476 api_server.go:72] duration metric: took 1.9662072s to wait for apiserver process to appear ...
	I1011 19:02:23.423242    5476 api_server.go:88] waiting for apiserver healthz status ...
	I1011 19:02:23.423242    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:23.438236    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 200:
	ok
	I1011 19:02:23.444880    5476 api_server.go:141] control plane version: v1.28.2
	I1011 19:02:23.445145    5476 api_server.go:131] duration metric: took 21.9033ms to wait for apiserver health ...
	I1011 19:02:23.445251    5476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 19:02:23.580252    5476 system_pods.go:59] 6 kube-system pods found
	I1011 19:02:23.580252    5476 system_pods.go:61] "coredns-5dd5756b68-g2h9s" [6626c9fe-763e-46b0-a66a-5bd39e157d8d] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "etcd-pause-375900" [4bf74444-af82-451c-b1d3-36e322aebe0b] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-apiserver-pause-375900" [4b6ca595-2579-4609-972b-3d352dbc9971] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-controller-manager-pause-375900" [b6f72b4d-30c2-4679-9634-612b1e81dc5d] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-proxy-6wv6x" [86829575-b97b-4960-a459-934aecb00dd5] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-scheduler-pause-375900" [e9b9a50f-2ab2-414b-a8a4-51708cdfb4d4] Running
	I1011 19:02:23.580252    5476 system_pods.go:74] duration metric: took 134.9669ms to wait for pod list to return data ...
	I1011 19:02:23.580252    5476 default_sa.go:34] waiting for default service account to be created ...
	I1011 19:02:23.767255    5476 default_sa.go:45] found service account: "default"
	I1011 19:02:23.768271    5476 default_sa.go:55] duration metric: took 186.9941ms for default service account to be created ...
	I1011 19:02:23.768271    5476 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 19:02:23.974250    5476 system_pods.go:86] 6 kube-system pods found
	I1011 19:02:23.974250    5476 system_pods.go:89] "coredns-5dd5756b68-g2h9s" [6626c9fe-763e-46b0-a66a-5bd39e157d8d] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "etcd-pause-375900" [4bf74444-af82-451c-b1d3-36e322aebe0b] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-apiserver-pause-375900" [4b6ca595-2579-4609-972b-3d352dbc9971] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-controller-manager-pause-375900" [b6f72b4d-30c2-4679-9634-612b1e81dc5d] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-proxy-6wv6x" [86829575-b97b-4960-a459-934aecb00dd5] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-scheduler-pause-375900" [e9b9a50f-2ab2-414b-a8a4-51708cdfb4d4] Running
	I1011 19:02:23.974250    5476 system_pods.go:126] duration metric: took 205.9779ms to wait for k8s-apps to be running ...
	I1011 19:02:23.974250    5476 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 19:02:23.987285    5476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 19:02:24.018267    5476 system_svc.go:56] duration metric: took 44.017ms WaitForService to wait for kubelet.
	I1011 19:02:24.018267    5476 kubeadm.go:581] duration metric: took 2.5612295s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1011 19:02:24.018267    5476 node_conditions.go:102] verifying NodePressure condition ...
	I1011 19:02:24.176265    5476 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1011 19:02:24.177257    5476 node_conditions.go:123] node cpu capacity is 16
	I1011 19:02:24.177257    5476 node_conditions.go:105] duration metric: took 158.9893ms to run NodePressure ...
	I1011 19:02:24.177257    5476 start.go:228] waiting for startup goroutines ...
	I1011 19:02:24.177257    5476 start.go:233] waiting for cluster config update ...
	I1011 19:02:24.177257    5476 start.go:242] writing updated cluster config ...
	I1011 19:02:24.194287    5476 ssh_runner.go:195] Run: rm -f paused
	I1011 19:02:24.357690    5476 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1011 19:02:24.361706    5476 out.go:177] * Done! kubectl is now configured to use "pause-375900" cluster and "default" namespace by default
	I1011 19:02:23.740247    1140 cli_runner.go:164] Run: docker container inspect old-k8s-version-796400 --format={{.State.Status}}
	I1011 19:02:23.930252    1140 machine.go:88] provisioning docker machine ...
	I1011 19:02:23.930252    1140 ubuntu.go:169] provisioning hostname "old-k8s-version-796400"
	I1011 19:02:23.938270    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:24.157247    1140 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:24.167269    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52899 <nil> <nil>}
	I1011 19:02:24.167269    1140 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-796400 && echo "old-k8s-version-796400" | sudo tee /etc/hostname
	I1011 19:02:24.403693    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-796400
	
	I1011 19:02:24.412705    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:24.631461    1140 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:24.631461    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52899 <nil> <nil>}
	I1011 19:02:24.632457    1140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-796400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-796400/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-796400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 19:02:24.852463    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 19:02:24.852463    1140 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I1011 19:02:24.853487    1140 ubuntu.go:177] setting up certificates
	I1011 19:02:24.853487    1140 provision.go:83] configureAuth start
	I1011 19:02:24.862458    1140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-796400
	I1011 19:02:25.089469    1140 provision.go:138] copyHostCerts
	I1011 19:02:25.090462    1140 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I1011 19:02:25.090462    1140 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I1011 19:02:25.090462    1140 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1011 19:02:25.092454    1140 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I1011 19:02:25.092454    1140 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I1011 19:02:25.092454    1140 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1011 19:02:25.094469    1140 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I1011 19:02:25.094469    1140 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I1011 19:02:25.094469    1140 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I1011 19:02:25.096467    1140 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-796400 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-796400]
	I1011 19:02:25.320462    1140 provision.go:172] copyRemoteCerts
	I1011 19:02:25.339467    1140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 19:02:25.350452    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:25.583053    1140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52899 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa Username:docker}
	I1011 19:02:25.724696    1140 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 19:02:25.805938    1140 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1011 19:02:25.867938    1140 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 19:02:21.306019    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.9
	W1011 19:02:21.335032    9448 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:02:21.352027    9448 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1011 19:02:21.352027    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.9-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0
	I1011 19:02:21.352027    9448 docker.go:318] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1011 19:02:21.356033    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:02:21.364027    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.9-0
	I1011 19:02:21.411020    9448 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.2" does not exist at hash "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce" in container runtime
	I1011 19:02:21.411020    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2
	I1011 19:02:21.411020    9448 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1011 19:02:21.411020    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.10.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1
	I1011 19:02:21.411020    9448 docker.go:318] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:02:21.411020    9448 docker.go:318] Removing image: registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:02:21.422033    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:02:21.425016    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:02:21.471041    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:02:21.557671    9448 docker.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 19:02:21.557671    9448 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1011 19:02:21.659335    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9
	I1011 19:02:21.671318    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:02:21.676316    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0
	I1011 19:02:21.676316    9448 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.2" needs transfer: "registry.k8s.io/kube-proxy:v1.28.2" does not exist at hash "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0" in container runtime
	I1011 19:02:21.676316    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2
	I1011 19:02:21.676316    9448 docker.go:318] Removing image: registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:02:21.683329    9448 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.9
	I1011 19:02:21.689360    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:02:21.694317    9448 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1011 19:02:21.754323    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2
	I1011 19:02:21.755322    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1
	I1011 19:02:21.755322    9448 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.2" does not exist at hash "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57" in container runtime
	I1011 19:02:21.755322    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2
	I1011 19:02:21.755322    9448 docker.go:318] Removing image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:02:21.772320    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:02:21.775329    9448 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1011 19:02:21.779346    9448 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1011 19:02:23.457374    9448 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.8995711s)
	I1011 19:02:23.457374    9448 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.2: (1.786048s)
	I1011 19:02:23.457458    9448 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1011 19:02:23.457458    9448 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.2" does not exist at hash "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8" in container runtime
	I1011 19:02:23.457564    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2
	I1011 19:02:23.457564    9448 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.9: (1.7742266s)
	I1011 19:02:23.457564    9448 docker.go:318] Removing image: registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:02:23.457564    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%s %y" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I1011 19:02:23.457674    9448 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: (1.7633488s)
	I1011 19:02:23.457823    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.9-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.9-0': No such file or directory
	I1011 19:02:23.457823    9448 ssh_runner.go:235] Completed: docker rmi registry.k8s.io/kube-proxy:v1.28.2: (1.7684551s)
	I1011 19:02:23.457823    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I1011 19:02:23.457823    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2
	I1011 19:02:23.457823    9448 ssh_runner.go:235] Completed: docker rmi registry.k8s.io/kube-controller-manager:v1.28.2: (1.6854957s)
	I1011 19:02:23.457823    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0 --> /var/lib/minikube/images/etcd_3.5.9-0 (102902784 bytes)
	I1011 19:02:23.457823    9448 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.2: (1.6824867s)
	I1011 19:02:23.457823    9448 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (1.6784689s)
	I1011 19:02:23.457823    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.28.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.28.2': No such file or directory
	I1011 19:02:23.457823    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2
	I1011 19:02:23.457823    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.10.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.10.1': No such file or directory
	I1011 19:02:23.457823    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2 --> /var/lib/minikube/images/kube-apiserver_v1.28.2 (34671104 bytes)
	I1011 19:02:23.457823    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1 --> /var/lib/minikube/images/coredns_v1.10.1 (16193024 bytes)
	I1011 19:02:23.472767    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:02:23.481764    9448 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1011 19:02:23.482747    9448 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.2
	I1011 19:02:23.624270    9448 docker.go:285] Loading image: /var/lib/minikube/images/pause_3.9
	I1011 19:02:23.624270    9448 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.9 | docker load"
	I1011 19:02:23.771253    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2
	I1011 19:02:23.771253    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.28.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.28.2': No such file or directory
	I1011 19:02:23.771253    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.28.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.28.2': No such file or directory
	I1011 19:02:23.771253    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2 --> /var/lib/minikube/images/kube-proxy_v1.28.2 (24561152 bytes)
	I1011 19:02:23.772258    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2 --> /var/lib/minikube/images/kube-controller-manager_v1.28.2 (33403392 bytes)
	I1011 19:02:23.786264    9448 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1011 19:02:24.166262    9448 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9 from cache
	I1011 19:02:24.336689    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.28.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.28.2': No such file or directory
	I1011 19:02:24.336689    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2 --> /var/lib/minikube/images/kube-scheduler_v1.28.2 (18819072 bytes)
	
	* 
	* ==> Docker <==
	* Oct 11 19:01:28 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7916b8c1fe0aa6349aa0a4e51327a2a7e7cc9d3a9c5c5de1d970b178194b6639/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:29 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ffd4e4805972d18c06ba5637dbb2cb043af8cf8c9f9541426e785f8506a8061f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.853844800Z" level=info msg="ignoring event" container=7916b8c1fe0aa6349aa0a4e51327a2a7e7cc9d3a9c5c5de1d970b178194b6639 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.853951200Z" level=info msg="ignoring event" container=46a29adb775e99ffbf85df2d1c7e1564cf011606f7d8f3055af3f7bd2f1327b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.854369600Z" level=info msg="ignoring event" container=2940af478bf4f0d96c255cb080794af6cfa35118bc2fd460bd91eb44bbabf19d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.855162500Z" level=info msg="ignoring event" container=19c537ee4c384da75f24c7695517673aaa6bbe7ef82f1c4791bf1338dc6c124f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.855292500Z" level=info msg="ignoring event" container=8cbb52bc46249598b5d0846ba76d5b82ac189d6a8362c3aaa09c50f640c678bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.855349900Z" level=info msg="ignoring event" container=b6ad1cd537881c4275d5e1aeb80f0b61c2c1f5a1c34765b2dffc8ab4e5465ecc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.855460900Z" level=info msg="ignoring event" container=e3fc1b46e1feedc3f1e31488df9ea2030aa650e2ca70dadd726e3bc614213b11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.855518000Z" level=info msg="ignoring event" container=0f568baa0e8688720f159eca7cc486067efb673e31ab3aa267f2581a3f3842ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.863396100Z" level=info msg="ignoring event" container=ffd4e4805972d18c06ba5637dbb2cb043af8cf8c9f9541426e785f8506a8061f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:54 pause-375900 dockerd[4423]: time="2023-10-11T19:01:54.592521900Z" level=info msg="ignoring event" container=3a68e3e25c04267c77aff941e6b65fa079c9bfbcb4e408574e9594e032f6b4a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:55 pause-375900 dockerd[4423]: time="2023-10-11T19:01:55.097221000Z" level=info msg="ignoring event" container=6ae2fd93692b8ac56e11dc9dbac636d2c64214f49b410d0ef89f3e98a90c7a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:56 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2d5e3814ef9fe0069183c8d86b1a394a216aebda74eae5572439ee31ed7fd05f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:57 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e37524766f7f807fe9be8a2ec7961cd222babe24ff720a64ad931d73e84f6a6/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:57 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe93555e1943cfc4d8097c71b544f8cbbcc9a69b55af4e8e4a58b8155f6818a8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:57 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:57Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-g2h9s_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ffd4e4805972d18c06ba5637dbb2cb043af8cf8c9f9541426e785f8506a8061f\""
	Oct 11 19:01:57 pause-375900 dockerd[4423]: time="2023-10-11T19:01:57.586088700Z" level=info msg="ignoring event" container=8dd1f463809addd6a8a63a2a72be57a1fdca6b45a38620652840ce6ef61759dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/303f8995cf3b6418ab9bbf8ac498ab20563a3ccfdfb4dcc62faa4223b24db15f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: W1011 19:01:58.154485    4721 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1581ccc65d1ea76b43ccc98d5cf98f80319874055dffb204e146483f8d0b8000/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/34b9fcffa3e832cab32b574b136fe77e072b9919a4f6eb2c12e006f057fce350/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: W1011 19:01:58.383541    4721 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: W1011 19:01:58.460598    4721 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Oct 11 19:02:11 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:02:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b93772cb7b694       ead0a4a53df89       18 seconds ago       Running             coredns                   2                   1581ccc65d1ea       coredns-5dd5756b68-g2h9s
	19db40dfaf81c       c120fed2beb84       18 seconds ago       Running             kube-proxy                2                   2d5e3814ef9fe       kube-proxy-6wv6x
	3aa4b3b7ee948       55f13c92defb1       27 seconds ago       Running             kube-controller-manager   2                   6e37524766f7f       kube-controller-manager-pause-375900
	87b2c2f958e07       cdcab12b2dd16       27 seconds ago       Running             kube-apiserver            2                   34b9fcffa3e83       kube-apiserver-pause-375900
	846b58faec370       73deb9a3f7025       27 seconds ago       Running             etcd                      2                   303f8995cf3b6       etcd-pause-375900
	29048537c048b       7a5d9d67a13f6       27 seconds ago       Running             kube-scheduler            2                   fe93555e1943c       kube-scheduler-pause-375900
	3a68e3e25c042       ead0a4a53df89       About a minute ago   Exited              coredns                   1                   ffd4e4805972d       coredns-5dd5756b68-g2h9s
	46a29adb775e9       55f13c92defb1       About a minute ago   Exited              kube-controller-manager   1                   0f568baa0e868       kube-controller-manager-pause-375900
	8cbb52bc46249       7a5d9d67a13f6       About a minute ago   Exited              kube-scheduler            1                   7916b8c1fe0aa       kube-scheduler-pause-375900
	6ae2fd93692b8       73deb9a3f7025       About a minute ago   Exited              etcd                      1                   b6ad1cd537881       etcd-pause-375900
	e3fc1b46e1fee       c120fed2beb84       About a minute ago   Exited              kube-proxy                1                   19c537ee4c384       kube-proxy-6wv6x
	8dd1f463809ad       cdcab12b2dd16       About a minute ago   Exited              kube-apiserver            1                   2940af478bf4f       kube-apiserver-pause-375900
	
	* 
	* ==> coredns [3a68e3e25c04] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54493 - 37301 "HINFO IN 8378913592617583348.8010908138851535645. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.055823s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [b93772cb7b69] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37703 - 54270 "HINFO IN 5490492823263349566.3533110061362470414. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0806851s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-375900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-375900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91587593de480e6b788546c040ff38fdb52a5106
	                    minikube.k8s.io/name=pause-375900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_11T19_00_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Oct 2023 19:00:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-375900
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Oct 2023 19:02:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Oct 2023 19:02:11 +0000   Wed, 11 Oct 2023 19:00:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Oct 2023 19:02:11 +0000   Wed, 11 Oct 2023 19:00:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Oct 2023 19:02:11 +0000   Wed, 11 Oct 2023 19:00:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Oct 2023 19:02:11 +0000   Wed, 11 Oct 2023 19:00:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-375900
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 b51ce203fd724b97a2c9f7c2c29a9e54
	  System UUID:                b51ce203fd724b97a2c9f7c2c29a9e54
	  Boot ID:                    210bf8b0-efd3-412e-9dae-f952437eab55
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-g2h9s                100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-pause-375900                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kube-apiserver-pause-375900             250m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-pause-375900    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-6wv6x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-pause-375900             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 106s                   kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  Starting                 51s                    kube-proxy       
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m27s)  kubelet          Node pause-375900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m27s)  kubelet          Node pause-375900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m27s)  kubelet          Node pause-375900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s                   kubelet          Node pause-375900 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s                   kubelet          Node pause-375900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s                   kubelet          Node pause-375900 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           111s                   node-controller  Node pause-375900 event: Registered Node pause-375900 in Controller
	  Normal  Starting                 29s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28s (x8 over 29s)      kubelet          Node pause-375900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 29s)      kubelet          Node pause-375900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 29s)      kubelet          Node pause-375900 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                     node-controller  Node pause-375900 event: Registered Node pause-375900 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct11 18:31] WSL2: Performing memory compaction.
	[Oct11 18:33] WSL2: Performing memory compaction.
	[Oct11 18:34] WSL2: Performing memory compaction.
	[Oct11 18:36] WSL2: Performing memory compaction.
	[Oct11 18:37] WSL2: Performing memory compaction.
	[Oct11 18:38] WSL2: Performing memory compaction.
	[Oct11 18:39] WSL2: Performing memory compaction.
	[Oct11 18:41] WSL2: Performing memory compaction.
	[Oct11 18:42] WSL2: Performing memory compaction.
	[Oct11 18:43] WSL2: Performing memory compaction.
	[Oct11 18:44] WSL2: Performing memory compaction.
	[Oct11 18:46] WSL2: Performing memory compaction.
	[Oct11 18:47] WSL2: Performing memory compaction.
	[Oct11 18:48] WSL2: Performing memory compaction.
	[Oct11 18:49] WSL2: Performing memory compaction.
	[Oct11 18:50] WSL2: Performing memory compaction.
	[Oct11 18:51] WSL2: Performing memory compaction.
	[Oct11 18:53] WSL2: Performing memory compaction.
	[Oct11 18:54] WSL2: Performing memory compaction.
	[  +8.722576] process 'docker/tmp/qemu-check600830759/check' started with executable stack
	[Oct11 18:55] WSL2: Performing memory compaction.
	[Oct11 18:56] WSL2: Performing memory compaction.
	[Oct11 18:58] WSL2: Performing memory compaction.
	[Oct11 19:00] WSL2: Performing memory compaction.
	[Oct11 19:01] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [6ae2fd93692b] <==
	* {"level":"info","ts":"2023-10-11T19:01:49.236971Z","caller":"traceutil/trace.go:171","msg":"trace[563901894] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:420; }","duration":"125.072ms","start":"2023-10-11T19:01:49.111876Z","end":"2023-10-11T19:01:49.236948Z","steps":["trace[563901894] 'agreement among raft nodes before linearized reading'  (duration: 122.8334ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:01:49.236988Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:01:48.330904Z","time spent":"906.0642ms","remote":"127.0.0.1:55918","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-10-11T19:01:49.237184Z","caller":"traceutil/trace.go:171","msg":"trace[427941167] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-375900; range_end:; response_count:1; response_revision:420; }","duration":"2.6343543s","start":"2023-10-11T19:01:46.602784Z","end":"2023-10-11T19:01:49.237138Z","steps":["trace[427941167] 'agreement among raft nodes before linearized reading'  (duration: 2.6319072s)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:01:49.237346Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:01:46.602768Z","time spent":"2.6345622s","remote":"127.0.0.1:55930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5215,"request content":"key:\"/registry/pods/kube-system/etcd-pause-375900\" "}
	{"level":"info","ts":"2023-10-11T19:01:49.237434Z","caller":"traceutil/trace.go:171","msg":"trace[1462386019] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:420; }","duration":"5.5658794s","start":"2023-10-11T19:01:43.67153Z","end":"2023-10-11T19:01:49.237409Z","steps":["trace[1462386019] 'agreement among raft nodes before linearized reading'  (duration: 5.5632348s)"],"step_count":1}
	{"level":"info","ts":"2023-10-11T19:01:49.34445Z","caller":"traceutil/trace.go:171","msg":"trace[696375484] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"102.6359ms","start":"2023-10-11T19:01:49.241787Z","end":"2023-10-11T19:01:49.344423Z","steps":["trace[696375484] 'process raft request'  (duration: 91.8263ms)","trace[696375484] 'compare'  (duration: 10.4733ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-11T19:01:49.452367Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-11T19:01:49.452588Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-375900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"info","ts":"2023-10-11T19:01:49.452785Z","caller":"traceutil/trace.go:171","msg":"trace[29268100] linearizableReadLoop","detail":"{readStateIndex:451; appliedIndex:448; }","duration":"104.3291ms","start":"2023-10-11T19:01:49.34841Z","end":"2023-10-11T19:01:49.452739Z","steps":["trace[29268100] 'read index received'  (duration: 103.9734ms)","trace[29268100] 'applied index is now lower than readState.Index'  (duration: 352.5µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-11T19:01:49.452943Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-11T19:01:49.453131Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-11T19:01:49.453185Z","caller":"traceutil/trace.go:171","msg":"trace[176465660] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"205.3632ms","start":"2023-10-11T19:01:49.247806Z","end":"2023-10-11T19:01:49.453169Z","steps":["trace[176465660] 'process raft request'  (duration: 204.6517ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:01:49.453323Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-11T19:01:49.453485Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-11T19:01:49.453535Z","caller":"v3rpc/watch.go:473","msg":"failed to send watch response to gRPC stream","error":"rpc error: code = Unavailable desc = transport is closing"}
	{"level":"info","ts":"2023-10-11T19:01:49.453611Z","caller":"traceutil/trace.go:171","msg":"trace[1172837418] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"191.3387ms","start":"2023-10-11T19:01:49.262258Z","end":"2023-10-11T19:01:49.453596Z","steps":["trace[1172837418] 'process raft request'  (duration: 190.3479ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-11T19:01:49.453675Z","caller":"traceutil/trace.go:171","msg":"trace[1618179482] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"176.1741ms","start":"2023-10-11T19:01:49.27749Z","end":"2023-10-11T19:01:49.453665Z","steps":["trace[1618179482] 'process raft request'  (duration: 175.1833ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:01:49.453802Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.3892ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:basic-user\" ","response":"range_response_count:1 size:678"}
	{"level":"info","ts":"2023-10-11T19:01:49.453912Z","caller":"traceutil/trace.go:171","msg":"trace[1847343109] range","detail":"{range_begin:/registry/clusterroles/system:basic-user; range_end:; response_count:1; response_revision:424; }","duration":"105.5135ms","start":"2023-10-11T19:01:49.348384Z","end":"2023-10-11T19:01:49.453897Z","steps":["trace[1847343109] 'agreement among raft nodes before linearized reading'  (duration: 105.3295ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:01:49.454027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.7519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-apiserver-pause-375900.178d22c66f462f9c\" ","response":"range_response_count:1 size:851"}
	{"level":"info","ts":"2023-10-11T19:01:49.454065Z","caller":"traceutil/trace.go:171","msg":"trace[1063206093] range","detail":"{range_begin:/registry/events/kube-system/kube-apiserver-pause-375900.178d22c66f462f9c; range_end:; response_count:1; response_revision:424; }","duration":"103.7914ms","start":"2023-10-11T19:01:49.35026Z","end":"2023-10-11T19:01:49.454052Z","steps":["trace[1063206093] 'agreement among raft nodes before linearized reading'  (duration: 103.7106ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-11T19:01:49.560555Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2023-10-11T19:01:55.068638Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-10-11T19:01:55.069918Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-10-11T19:01:55.070061Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-375900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	* 
	* ==> etcd [846b58faec37] <==
	* {"level":"info","ts":"2023-10-11T19:02:12.117067Z","caller":"traceutil/trace.go:171","msg":"trace[2134008590] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"645.8142ms","start":"2023-10-11T19:02:11.471224Z","end":"2023-10-11T19:02:12.117039Z","steps":["trace[2134008590] 'process raft request'  (duration: 531.3311ms)","trace[2134008590] 'compare'  (duration: 112.7161ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-11T19:02:12.117279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"632.0826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:1 size:3021"}
	{"level":"warn","ts":"2023-10-11T19:02:12.117323Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:11.471136Z","time spent":"646.0808ms","remote":"127.0.0.1:57002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4388,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-375900\" mod_revision:420 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-375900\" value_size:4326 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-375900\" > >"}
	{"level":"info","ts":"2023-10-11T19:02:12.117357Z","caller":"traceutil/trace.go:171","msg":"trace[1356316456] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:428; }","duration":"632.1738ms","start":"2023-10-11T19:02:11.48516Z","end":"2023-10-11T19:02:12.117334Z","steps":["trace[1356316456] 'agreement among raft nodes before linearized reading'  (duration: 631.9492ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-11T19:02:12.117378Z","caller":"traceutil/trace.go:171","msg":"trace[345143402] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"623.2183ms","start":"2023-10-11T19:02:11.494139Z","end":"2023-10-11T19:02:12.117358Z","steps":["trace[345143402] 'process raft request'  (duration: 622.3908ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:02:12.11741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:11.485149Z","time spent":"632.2456ms","remote":"127.0.0.1:56982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":3044,"request content":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" "}
	{"level":"warn","ts":"2023-10-11T19:02:12.117498Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:11.494122Z","time spent":"623.2937ms","remote":"127.0.0.1:57000","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4375,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/pause-375900\" mod_revision:368 > success:<request_put:<key:\"/registry/minions/pause-375900\" value_size:4337 >> failure:<request_range:<key:\"/registry/minions/pause-375900\" > >"}
	{"level":"info","ts":"2023-10-11T19:02:13.35551Z","caller":"traceutil/trace.go:171","msg":"trace[1838844536] linearizableReadLoop","detail":"{readStateIndex:479; appliedIndex:478; }","duration":"100.2773ms","start":"2023-10-11T19:02:13.255205Z","end":"2023-10-11T19:02:13.355482Z","steps":["trace[1838844536] 'read index received'  (duration: 19.0497ms)","trace[1838844536] 'applied index is now lower than readState.Index'  (duration: 81.2242ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-11T19:02:13.355633Z","caller":"traceutil/trace.go:171","msg":"trace[914169653] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"176.1766ms","start":"2023-10-11T19:02:13.17936Z","end":"2023-10-11T19:02:13.355542Z","steps":["trace[914169653] 'process raft request'  (duration: 94.949ms)","trace[914169653] 'compare'  (duration: 80.9741ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-11T19:02:13.355738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.5374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:expand-controller\" ","response":"range_response_count:1 size:880"}
	{"level":"info","ts":"2023-10-11T19:02:13.355799Z","caller":"traceutil/trace.go:171","msg":"trace[55879089] range","detail":"{range_begin:/registry/clusterroles/system:controller:expand-controller; range_end:; response_count:1; response_revision:448; }","duration":"100.6004ms","start":"2023-10-11T19:02:13.255169Z","end":"2023-10-11T19:02:13.355769Z","steps":["trace[55879089] 'agreement among raft nodes before linearized reading'  (duration: 100.4497ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:02:15.90109Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9722580140622618014,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2023-10-11T19:02:16.401603Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9722580140622618014,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2023-10-11T19:02:16.736524Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.2920234s","expected-duration":"1s"}
	{"level":"info","ts":"2023-10-11T19:02:16.741605Z","caller":"traceutil/trace.go:171","msg":"trace[1014811420] linearizableReadLoop","detail":"{readStateIndex:530; appliedIndex:529; }","duration":"1.3416101s","start":"2023-10-11T19:02:15.399654Z","end":"2023-10-11T19:02:16.741264Z","steps":["trace[1014811420] 'read index received'  (duration: 1.339411s)","trace[1014811420] 'applied index is now lower than readState.Index'  (duration: 2.1964ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-11T19:02:16.741603Z","caller":"traceutil/trace.go:171","msg":"trace[317881645] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"1.3419286s","start":"2023-10-11T19:02:15.399586Z","end":"2023-10-11T19:02:16.741515Z","steps":["trace[317881645] 'process raft request'  (duration: 1.3399413s)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:02:16.742374Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.3424521s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-375900\" ","response":"range_response_count:1 size:5306"}
	{"level":"info","ts":"2023-10-11T19:02:16.74253Z","caller":"traceutil/trace.go:171","msg":"trace[827837724] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-375900; range_end:; response_count:1; response_revision:487; }","duration":"1.3428793s","start":"2023-10-11T19:02:15.399633Z","end":"2023-10-11T19:02:16.742512Z","steps":["trace[827837724] 'agreement among raft nodes before linearized reading'  (duration: 1.3420964s)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:02:16.74263Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:15.39962Z","time spent":"1.3429781s","remote":"127.0.0.1:57002","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5329,"request content":"key:\"/registry/pods/kube-system/etcd-pause-375900\" "}
	{"level":"warn","ts":"2023-10-11T19:02:16.74245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"360.3249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-11T19:02:16.742853Z","caller":"traceutil/trace.go:171","msg":"trace[1832068937] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:487; }","duration":"360.6066ms","start":"2023-10-11T19:02:16.382091Z","end":"2023-10-11T19:02:16.742715Z","steps":["trace[1832068937] 'agreement among raft nodes before linearized reading'  (duration: 360.255ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:02:16.742865Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:15.399553Z","time spent":"1.3425813s","remote":"127.0.0.1:57002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7340,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-375900\" mod_revision:485 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-375900\" value_size:7278 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-375900\" > >"}
	{"level":"warn","ts":"2023-10-11T19:02:16.742913Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:16.382071Z","time spent":"360.8251ms","remote":"127.0.0.1:57026","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-10-11T19:02:30.569871Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.1671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-11T19:02:30.570063Z","caller":"traceutil/trace.go:171","msg":"trace[802662111] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:499; }","duration":"181.3732ms","start":"2023-10-11T19:02:30.388666Z","end":"2023-10-11T19:02:30.570039Z","steps":["trace[802662111] 'range keys from in-memory index tree'  (duration: 181.0413ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:02:31 up  1:16,  0 users,  load average: 9.01, 8.56, 5.45
	Linux pause-375900 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [87b2c2f958e0] <==
	* Trace[128862297]: ["GuaranteedUpdate etcd3" audit-id:808c9a53-bb68-428b-ab21-5dff646d982a,key:/minions/pause-375900,type:*core.Node,resource:nodes 637ms (19:02:11.482)
	Trace[128862297]:  ---"Txn call completed" 625ms (19:02:12.118)]
	Trace[128862297]: ---"Object stored in database" 626ms (19:02:12.118)
	Trace[128862297]: [638.0845ms] [638.0845ms] END
	I1011 19:02:12.120182       1 trace.go:236] Trace[1833827336]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2a8a8ca5-a359-4b8c-9805-787bddfac120,client:192.168.85.2,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-375900/status,user-agent:kubelet/v1.28.2 (linux/amd64) kubernetes/89a4ea3,verb:PATCH (11-Oct-2023 19:02:11.451) (total time: 668ms):
	Trace[1833827336]: ["GuaranteedUpdate etcd3" audit-id:2a8a8ca5-a359-4b8c-9805-787bddfac120,key:/pods/kube-system/kube-scheduler-pause-375900,type:*core.Pod,resource:pods 667ms (19:02:11.452)
	Trace[1833827336]:  ---"Txn call completed" 648ms (19:02:12.118)]
	Trace[1833827336]: ---"About to check admission control" 17ms (19:02:11.469)
	Trace[1833827336]: ---"Object stored in database" 649ms (19:02:12.118)
	Trace[1833827336]: [668.1182ms] [668.1182ms] END
	I1011 19:02:15.053185       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1011 19:02:15.080338       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1011 19:02:15.208169       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1011 19:02:15.285029       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1011 19:02:15.312107       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1011 19:02:16.748086       1 trace.go:236] Trace[1550518484]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:54105783-65c2-42ef-944d-0ecc2f252337,client:192.168.85.2,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-375900/status,user-agent:kubelet/v1.28.2 (linux/amd64) kubernetes/89a4ea3,verb:PATCH (11-Oct-2023 19:02:15.392) (total time: 1355ms):
	Trace[1550518484]: ["GuaranteedUpdate etcd3" audit-id:54105783-65c2-42ef-944d-0ecc2f252337,key:/pods/kube-system/kube-apiserver-pause-375900,type:*core.Pod,resource:pods 1355ms (19:02:15.392)
	Trace[1550518484]:  ---"Txn call completed" 1346ms (19:02:16.745)]
	Trace[1550518484]: ---"Object stored in database" 1348ms (19:02:16.747)
	Trace[1550518484]: [1.3552742s] [1.3552742s] END
	I1011 19:02:16.748405       1 trace.go:236] Trace[1051856289]: "Get" accept:application/json, */*,audit-id:b794ef53-4bb7-48fd-af0e-5e1628609e9e,client:192.168.85.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-pause-375900,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:GET (11-Oct-2023 19:02:15.398) (total time: 1349ms):
	Trace[1051856289]: ---"About to write a response" 1348ms (19:02:16.746)
	Trace[1051856289]: [1.3495733s] [1.3495733s] END
	I1011 19:02:25.762350       1 controller.go:624] quota admission added evaluator for: endpoints
	I1011 19:02:25.770692       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [8dd1f463809a] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 19:01:55.176922       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 19:01:55.198664       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 19:01:55.359395       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [3aa4b3b7ee94] <==
	* I1011 19:02:25.663992       1 shared_informer.go:318] Caches are synced for node
	I1011 19:02:25.664032       1 shared_informer.go:318] Caches are synced for namespace
	I1011 19:02:25.668044       1 shared_informer.go:318] Caches are synced for daemon sets
	I1011 19:02:25.668229       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1011 19:02:25.668314       1 taint_manager.go:211] "Sending events to api server"
	I1011 19:02:25.668023       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1011 19:02:25.668483       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-375900"
	I1011 19:02:25.668669       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1011 19:02:25.668112       1 event.go:307] "Event occurred" object="pause-375900" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-375900 event: Registered Node pause-375900 in Controller"
	I1011 19:02:25.668915       1 range_allocator.go:174] "Sending events to api server"
	I1011 19:02:25.668962       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1011 19:02:25.668975       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1011 19:02:25.668989       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1011 19:02:25.752008       1 shared_informer.go:318] Caches are synced for attach detach
	I1011 19:02:25.752703       1 shared_informer.go:318] Caches are synced for resource quota
	I1011 19:02:25.756519       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1011 19:02:25.781060       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1011 19:02:25.858230       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1011 19:02:25.858366       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1011 19:02:25.858390       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1011 19:02:25.858413       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1011 19:02:25.858608       1 shared_informer.go:318] Caches are synced for resource quota
	I1011 19:02:26.157975       1 shared_informer.go:318] Caches are synced for garbage collector
	I1011 19:02:26.166826       1 shared_informer.go:318] Caches are synced for garbage collector
	I1011 19:02:26.166976       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [46a29adb775e] <==
	* I1011 19:01:33.313301       1 serving.go:348] Generated self-signed cert in-memory
	I1011 19:01:33.754258       1 controllermanager.go:189] "Starting" version="v1.28.2"
	I1011 19:01:33.754409       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 19:01:33.757529       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1011 19:01:33.757580       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1011 19:01:33.758134       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I1011 19:01:33.758376       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1011 19:01:49.241727       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-contro
ller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	* 
	* ==> kube-proxy [19db40dfaf81] <==
	* I1011 19:02:13.573061       1 server_others.go:69] "Using iptables proxy"
	I1011 19:02:13.699987       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1011 19:02:13.793674       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1011 19:02:13.851304       1 server_others.go:152] "Using iptables Proxier"
	I1011 19:02:13.851484       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1011 19:02:13.851502       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1011 19:02:13.851550       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1011 19:02:13.852374       1 server.go:846] "Version info" version="v1.28.2"
	I1011 19:02:13.852491       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 19:02:13.854311       1 config.go:188] "Starting service config controller"
	I1011 19:02:13.854330       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1011 19:02:13.854433       1 config.go:315] "Starting node config controller"
	I1011 19:02:13.854467       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1011 19:02:13.854509       1 config.go:97] "Starting endpoint slice config controller"
	I1011 19:02:13.854526       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1011 19:02:13.955113       1 shared_informer.go:318] Caches are synced for node config
	I1011 19:02:13.955583       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1011 19:02:13.955690       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [e3fc1b46e1fe] <==
	* I1011 19:01:29.069795       1 server_others.go:69] "Using iptables proxy"
	E1011 19:01:29.152445       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-375900": dial tcp 192.168.85.2:8443: connect: connection refused
	I1011 19:01:39.130708       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1011 19:01:39.198253       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1011 19:01:39.254944       1 server_others.go:152] "Using iptables Proxier"
	I1011 19:01:39.255176       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1011 19:01:39.255191       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1011 19:01:39.255240       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1011 19:01:39.256879       1 server.go:846] "Version info" version="v1.28.2"
	I1011 19:01:39.256979       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 19:01:39.259709       1 config.go:188] "Starting service config controller"
	I1011 19:01:39.259839       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1011 19:01:39.261956       1 config.go:97] "Starting endpoint slice config controller"
	I1011 19:01:39.262112       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1011 19:01:39.262625       1 config.go:315] "Starting node config controller"
	I1011 19:01:39.262781       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1011 19:01:39.362414       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1011 19:01:39.362575       1 shared_informer.go:318] Caches are synced for service config
	I1011 19:01:39.363088       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [29048537c048] <==
	* I1011 19:02:08.072947       1 serving.go:348] Generated self-signed cert in-memory
	I1011 19:02:11.480091       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1011 19:02:11.480165       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 19:02:12.122039       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1011 19:02:12.122084       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1011 19:02:12.122339       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1011 19:02:12.122365       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1011 19:02:12.122426       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1011 19:02:12.122448       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 19:02:12.123443       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1011 19:02:12.126363       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1011 19:02:12.222582       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1011 19:02:12.251602       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1011 19:02:12.251935       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [8cbb52bc4624] <==
	* I1011 19:01:32.290306       1 serving.go:348] Generated self-signed cert in-memory
	W1011 19:01:35.352673       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1011 19:01:35.352720       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1011 19:01:35.352742       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1011 19:01:35.352756       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1011 19:01:35.475728       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1011 19:01:35.475934       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 19:01:35.478730       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1011 19:01:35.479138       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1011 19:01:35.479222       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 19:01:35.479265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1011 19:01:35.579841       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 19:01:49.456664       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1011 19:01:49.456885       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1011 19:01:49.457764       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1011 19:01:49.458109       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Oct 11 19:02:02 pause-375900 kubelet[6863]: I1011 19:02:02.808989    6863 kubelet_node_status.go:70] "Attempting to register node" node="pause-375900"
	Oct 11 19:02:02 pause-375900 kubelet[6863]: E1011 19:02:02.810057    6863 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="pause-375900"
	Oct 11 19:02:02 pause-375900 kubelet[6863]: W1011 19:02:02.852465    6863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:02 pause-375900 kubelet[6863]: E1011 19:02:02.852754    6863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:02 pause-375900 kubelet[6863]: I1011 19:02:02.885189    6863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6945e4707c4df6551d8db4c0565a0dfa48ebe53e5a7604933a086394a40530a5"
	Oct 11 19:02:04 pause-375900 kubelet[6863]: E1011 19:02:04.162928    6863 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-375900?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="3.2s"
	Oct 11 19:02:04 pause-375900 kubelet[6863]: W1011 19:02:04.453591    6863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:04 pause-375900 kubelet[6863]: E1011 19:02:04.453789    6863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:04 pause-375900 kubelet[6863]: I1011 19:02:04.556047    6863 kubelet_node_status.go:70] "Attempting to register node" node="pause-375900"
	Oct 11 19:02:04 pause-375900 kubelet[6863]: E1011 19:02:04.557164    6863 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="pause-375900"
	Oct 11 19:02:04 pause-375900 kubelet[6863]: W1011 19:02:04.663256    6863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:04 pause-375900 kubelet[6863]: E1011 19:02:04.663515    6863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:07 pause-375900 kubelet[6863]: I1011 19:02:07.776328    6863 kubelet_node_status.go:70] "Attempting to register node" node="pause-375900"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.156694    6863 apiserver.go:52] "Watching apiserver"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.164410    6863 topology_manager.go:215] "Topology Admit Handler" podUID="6626c9fe-763e-46b0-a66a-5bd39e157d8d" podNamespace="kube-system" podName="coredns-5dd5756b68-g2h9s"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.165200    6863 topology_manager.go:215] "Topology Admit Handler" podUID="86829575-b97b-4960-a459-934aecb00dd5" podNamespace="kube-system" podName="kube-proxy-6wv6x"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.259314    6863 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.354472    6863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86829575-b97b-4960-a459-934aecb00dd5-lib-modules\") pod \"kube-proxy-6wv6x\" (UID: \"86829575-b97b-4960-a459-934aecb00dd5\") " pod="kube-system/kube-proxy-6wv6x"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.354671    6863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86829575-b97b-4960-a459-934aecb00dd5-xtables-lock\") pod \"kube-proxy-6wv6x\" (UID: \"86829575-b97b-4960-a459-934aecb00dd5\") " pod="kube-system/kube-proxy-6wv6x"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.450961    6863 kubelet_node_status.go:108] "Node was previously registered" node="pause-375900"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.451258    6863 kubelet_node_status.go:73] "Successfully registered node" node="pause-375900"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.455639    6863 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.458782    6863 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.667722    6863 scope.go:117] "RemoveContainer" containerID="3a68e3e25c04267c77aff941e6b65fa079c9bfbcb4e408574e9594e032f6b4a7"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.667901    6863 scope.go:117] "RemoveContainer" containerID="e3fc1b46e1feedc3f1e31488df9ea2030aa650e2ca70dadd726e3bc614213b11"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W1011 19:02:27.133452   10344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-375900 -n pause-375900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-375900 -n pause-375900: (1.5823201s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-375900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-375900
helpers_test.go:235: (dbg) docker inspect pause-375900:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00",
	        "Created": "2023-10-11T18:59:23.0108246Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 223487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-11T18:59:23.7085463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00/hosts",
	        "LogPath": "/var/lib/docker/containers/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00/2f79efe94b89c412d0a943e62476d06039d3dfc2b40217207963c94dd6629c00-json.log",
	        "Name": "/pause-375900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-375900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-375900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6ad71f9dafc17ce1267f3b3e3222686fe47340dfa256b304581f78eaef6347c2-init/diff:/var/lib/docker/overlay2/6a818081599e04504e41e5c7d63b7e52f1ec769a66e42764d0a42ce267813803/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ad71f9dafc17ce1267f3b3e3222686fe47340dfa256b304581f78eaef6347c2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ad71f9dafc17ce1267f3b3e3222686fe47340dfa256b304581f78eaef6347c2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ad71f9dafc17ce1267f3b3e3222686fe47340dfa256b304581f78eaef6347c2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-375900",
	                "Source": "/var/lib/docker/volumes/pause-375900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-375900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-375900",
	                "name.minikube.sigs.k8s.io": "pause-375900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "259bda10ce091715fd41c6598f23ad9dcdc94890aebdf17fc6ee20a352f88bb4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52535"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52536"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52537"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52533"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52534"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/259bda10ce09",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-375900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2f79efe94b89",
	                        "pause-375900"
	                    ],
	                    "NetworkID": "73597507752d28f00176e111983f096c5dbcf3c5c87d646a205ffcace72b7fe9",
	                    "EndpointID": "c536ec0913e823bbcfa8a7c3fd544d890760234ad9b473c74526f2236a867914",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
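The `docker inspect` JSON above is the raw material the post-mortem helpers work from: container liveness comes from `State`, and the host port for the Kubernetes apiserver comes from the `8443/tcp` entry under `NetworkSettings.Ports`. A minimal, hypothetical sketch of extracting those two facts (sample trimmed to the relevant fields, values copied from the log above):

```python
import json

# Abbreviated sample of the `docker inspect pause-375900` output above,
# trimmed to the fields we read; values copied verbatim from the log.
inspect_output = """
[
    {
        "Name": "/pause-375900",
        "State": {"Status": "running", "Running": true, "Paused": false},
        "NetworkSettings": {
            "Ports": {
                "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "52534"}]
            }
        }
    }
]
"""

# `docker inspect` always emits a JSON array, one object per container.
containers = json.loads(inspect_output)
state = containers[0]["State"]
ports = containers[0]["NetworkSettings"]["Ports"]

# A post-mortem check cares about whether the container is still up
# and which host port the apiserver (8443/tcp) is published on.
assert state["Running"] and not state["Paused"]
apiserver_host_port = ports["8443/tcp"][0]["HostPort"]
print(apiserver_host_port)  # → 52534
```

The same fields can be pulled directly with `docker inspect --format` templates, but parsing the full JSON keeps every field available for later checks.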
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-375900 -n pause-375900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-375900 -n pause-375900: (1.6038053s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-375900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-375900 logs -n 25: (3.1716307s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl cat docker                                 |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo cat                            | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /etc/docker/daemon.json                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo docker                         | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | system info                                          |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl status cri-docker                          |                          |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo cat                            | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo cat                            | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | cri-dockerd --version                                |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl status containerd                          |                          |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl cat containerd                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo cat                            | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo cat                            | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | containerd config dump                               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl status crio --all                          |                          |                   |         |                     |                     |
	|         | --full --no-pager                                    |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo                                | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo find                           | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |                   |         |                     |                     |
	| ssh     | -p cilium-035800 sudo crio                           | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | config                                               |                          |                   |         |                     |                     |
	| delete  | -p cilium-035800                                     | cilium-035800            | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC | 11 Oct 23 19:01 UTC |
	| start   | -p force-systemd-env-769500                          | force-systemd-env-769500 | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | --memory=2048                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	| ssh     | docker-flags-068100 ssh                              | docker-flags-068100      | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC | 11 Oct 23 19:01 UTC |
	|         | sudo systemctl show docker                           |                          |                   |         |                     |                     |
	|         | --property=Environment                               |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | docker-flags-068100 ssh                              | docker-flags-068100      | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC | 11 Oct 23 19:01 UTC |
	|         | sudo systemctl show docker                           |                          |                   |         |                     |                     |
	|         | --property=ExecStart                                 |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| delete  | -p docker-flags-068100                               | docker-flags-068100      | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC | 11 Oct 23 19:01 UTC |
	| delete  | -p running-upgrade-051900                            | running-upgrade-051900   | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC | 11 Oct 23 19:01 UTC |
	| start   | -p old-k8s-version-796400                            | old-k8s-version-796400   | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | --memory=2200                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                          |                   |         |                     |                     |
	|         | --kvm-network=default                                |                          |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                          |                   |         |                     |                     |
	|         | --disable-driver-mounts                              |                          |                   |         |                     |                     |
	|         | --keep-context=false                                 |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                          |                   |         |                     |                     |
	| start   | -p no-preload-517500                                 | no-preload-517500        | minikube2\jenkins | v1.31.2 | 11 Oct 23 19:01 UTC |                     |
	|         | --memory=2200                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr                                    |                          |                   |         |                     |                     |
	|         | --wait=true --preload=false                          |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                         |                          |                   |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/11 19:01:41
	Running on machine: minikube2
	Binary: Built with gc go1.21.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 19:01:41.300843    9448 out.go:296] Setting OutFile to fd 1860 ...
	I1011 19:01:41.301832    9448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 19:01:41.301832    9448 out.go:309] Setting ErrFile to fd 1492...
	I1011 19:01:41.301832    9448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 19:01:41.318835    9448 out.go:303] Setting JSON to false
	I1011 19:01:41.322837    9448 start.go:128] hostinfo: {"hostname":"minikube2","uptime":5012,"bootTime":1697045888,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3570 Build 19045.3570","kernelVersion":"10.0.19045.3570 Build 19045.3570","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1011 19:01:41.322837    9448 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1011 19:01:41.333883    9448 out.go:177] * [no-preload-517500] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	I1011 19:01:41.340836    9448 notify.go:220] Checking for updates...
	I1011 19:01:41.344839    9448 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1011 19:01:41.351841    9448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 19:01:41.358850    9448 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1011 19:01:41.364887    9448 out.go:177]   - MINIKUBE_LOCATION=17402
	I1011 19:01:41.371849    9448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 19:01:40.968814    1140 config.go:182] Loaded profile config "force-systemd-env-769500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:01:40.969777    1140 config.go:182] Loaded profile config "pause-375900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:01:40.969777    1140 config.go:182] Loaded profile config "running-upgrade-051900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1011 19:01:40.969777    1140 driver.go:378] Setting default libvirt URI to qemu:///system
	I1011 19:01:41.263869    1140 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.1 (123237)
	I1011 19:01:41.269857    1140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:41.640851    1140 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:82 SystemTime:2023-10-11 19:01:41.5891175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:41.647846    1140 out.go:177] * Using the docker driver based on user configuration
	I1011 19:01:41.376843    9448 config.go:182] Loaded profile config "force-systemd-env-769500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:01:41.376843    9448 config.go:182] Loaded profile config "pause-375900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:01:41.377834    9448 driver.go:378] Setting default libvirt URI to qemu:///system
	I1011 19:01:41.687830    9448 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.1 (123237)
	I1011 19:01:41.695835    9448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:42.110151    9448 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:82 SystemTime:2023-10-11 19:01:42.0547865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:42.117308    9448 out.go:177] * Using the docker driver based on user configuration
	I1011 19:01:41.659843    1140 start.go:298] selected driver: docker
	I1011 19:01:41.659843    1140 start.go:902] validating driver "docker" against <nil>
	I1011 19:01:41.659843    1140 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 19:01:41.734842    1140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:42.124985    1140 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:82 SystemTime:2023-10-11 19:01:42.0721018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:42.124985    1140 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1011 19:01:42.125998    1140 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 19:01:42.132007    1140 out.go:177] * Using Docker Desktop driver with root privileges
	I1011 19:01:42.135987    1140 cni.go:84] Creating CNI manager for ""
	I1011 19:01:42.135987    1140 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1011 19:01:42.135987    1140 start_flags.go:323] config:
	{Name:old-k8s-version-796400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-796400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 19:01:42.141010    1140 out.go:177] * Starting control plane node old-k8s-version-796400 in cluster old-k8s-version-796400
	I1011 19:01:42.147996    1140 cache.go:122] Beginning downloading kic base image for docker with docker
	I1011 19:01:42.153030    1140 out.go:177] * Pulling base image ...
	I1011 19:01:42.161193    1140 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1011 19:01:42.161193    1140 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1011 19:01:42.161820    1140 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1011 19:01:42.161820    1140 cache.go:57] Caching tarball of preloaded images
	I1011 19:01:42.161820    1140 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1011 19:01:42.162365    1140 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1011 19:01:42.162607    1140 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-796400\config.json ...
	I1011 19:01:42.162712    1140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-796400\config.json: {Name:mk6b60692104ca563416dea5167fd5a5170d1dee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:01:42.374079    1140 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1011 19:01:42.374079    1140 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1011 19:01:42.374079    1140 cache.go:195] Successfully downloaded all kic artifacts
	I1011 19:01:42.374079    1140 start.go:365] acquiring machines lock for old-k8s-version-796400: {Name:mkc4efc9d363568ee54213729b0b3cd095a41f46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.374079    1140 start.go:369] acquired machines lock for "old-k8s-version-796400" in 0s
	I1011 19:01:42.374079    1140 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-796400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-796400 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 19:01:42.374079    1140 start.go:125] createHost starting for "" (driver="docker")
	I1011 19:01:42.122984    9448 start.go:298] selected driver: docker
	I1011 19:01:42.122984    9448 start.go:902] validating driver "docker" against <nil>
	I1011 19:01:42.122984    9448 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 19:01:42.185783    9448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:42.563235    9448 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:82 SystemTime:2023-10-11 19:01:42.5088286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:42.563551    9448 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1011 19:01:42.564961    9448 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 19:01:42.569581    9448 out.go:177] * Using Docker Desktop driver with root privileges
	I1011 19:01:42.573883    9448 cni.go:84] Creating CNI manager for ""
	I1011 19:01:42.573883    9448 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 19:01:42.573883    9448 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1011 19:01:42.573883    9448 start_flags.go:323] config:
	{Name:no-preload-517500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-517500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 19:01:42.577874    9448 out.go:177] * Starting control plane node no-preload-517500 in cluster no-preload-517500
	I1011 19:01:42.587861    9448 cache.go:122] Beginning downloading kic base image for docker with docker
	I1011 19:01:42.592479    9448 out.go:177] * Pulling base image ...
	I1011 19:01:42.599472    9448 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 19:01:42.599472    9448 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1011 19:01:42.599472    9448 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\config.json ...
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.9 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2
	I1011 19:01:42.600491    9448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\config.json: {Name:mk429a522dde83a84625c193e3366eef8e10aa93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.9-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.10.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1
	I1011 19:01:42.600491    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2
	I1011 19:01:42.784983    9448 cache.go:107] acquiring lock: {Name:mke142abb3c6a2c41270574b7fb8a623109e608b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.785977    9448 cache.go:107] acquiring lock: {Name:mk4fb1c40f5f6719a0516143715f5e8d99ab233c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.785977    9448 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:01:42.785977    9448 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:01:42.786985    9448 cache.go:107] acquiring lock: {Name:mk8dec1189f683ead1bd04bb2e1c85005d8ca37f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.786985    9448 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:01:42.788022    9448 cache.go:107] acquiring lock: {Name:mk93ccdec90972c05247bea23df9b97c54ef0291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.788022    9448 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1011 19:01:42.788988    9448 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 188.4959ms
	I1011 19:01:42.788988    9448 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1011 19:01:42.791989    9448 cache.go:107] acquiring lock: {Name:mk9cc05e0ee5270b563134ba1bb3828ae0a31931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.792991    9448 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:01:42.796983    9448 cache.go:107] acquiring lock: {Name:mk47b91a03ce6ebe82951e077a88bdcd37a4e865 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.796983    9448 cache.go:107] acquiring lock: {Name:mk7898ef7d3c0e6a2ac170399020a6163f90b713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.796983    9448 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1011 19:01:42.796983    9448 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1011 19:01:42.797986    9448 cache.go:107] acquiring lock: {Name:mkf3ae7199fe86f09763e1a10cce7a56654c6cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.799153    9448 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:01:42.801813    9448 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:01:42.801813    9448 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:01:42.804608    9448 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:01:42.810439    9448 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:01:42.815448    9448 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:01:42.815448    9448 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1011 19:01:42.819423    9448 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1011 19:01:42.844430    9448 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1011 19:01:42.844430    9448 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1011 19:01:42.844430    9448 cache.go:195] Successfully downloaded all kic artifacts
	I1011 19:01:42.844430    9448 start.go:365] acquiring machines lock for no-preload-517500: {Name:mk805f68fd9169e44d973a163ab8af5ee8839274 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 19:01:42.844430    9448 start.go:369] acquired machines lock for "no-preload-517500" in 0s
	I1011 19:01:42.844430    9448 start.go:93] Provisioning new machine with config: &{Name:no-preload-517500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-517500 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 19:01:42.844430    9448 start.go:125] createHost starting for "" (driver="docker")
	I1011 19:01:39.125342    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:39.125342    5476 retry.go:31] will retry after 851.309614ms: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:39.987825    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:40.005219    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:40.005539    5476 retry.go:31] will retry after 1.036481518s: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:41.061763    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:41.083786    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:41.083786    5476 retry.go:31] will retry after 1.251967696s: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:42.343088    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:42.383079    1140 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1011 19:01:42.383079    1140 start.go:159] libmachine.API.Create for "old-k8s-version-796400" (driver="docker")
	I1011 19:01:42.383079    1140 client.go:168] LocalClient.Create starting
	I1011 19:01:42.384079    1140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1011 19:01:42.384079    1140 main.go:141] libmachine: Decoding PEM data...
	I1011 19:01:42.384079    1140 main.go:141] libmachine: Parsing certificate...
	I1011 19:01:42.384079    1140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1011 19:01:42.385081    1140 main.go:141] libmachine: Decoding PEM data...
	I1011 19:01:42.385081    1140 main.go:141] libmachine: Parsing certificate...
	I1011 19:01:42.398076    1140 cli_runner.go:164] Run: docker network inspect old-k8s-version-796400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1011 19:01:42.578866    1140 cli_runner.go:211] docker network inspect old-k8s-version-796400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1011 19:01:42.584868    1140 network_create.go:281] running [docker network inspect old-k8s-version-796400] to gather additional debugging logs...
	I1011 19:01:42.584868    1140 cli_runner.go:164] Run: docker network inspect old-k8s-version-796400
	W1011 19:01:42.812427    1140 cli_runner.go:211] docker network inspect old-k8s-version-796400 returned with exit code 1
	I1011 19:01:42.812427    1140 network_create.go:284] error running [docker network inspect old-k8s-version-796400]: docker network inspect old-k8s-version-796400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-796400 not found
	I1011 19:01:42.812427    1140 network_create.go:286] output of [docker network inspect old-k8s-version-796400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-796400 not found
	
	** /stderr **
	I1011 19:01:42.824425    1140 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 19:01:43.034418    1140 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:43.065416    1140 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:43.096417    1140 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:43.119420    1140 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002339950}
	I1011 19:01:43.119420    1140 network_create.go:124] attempt to create docker network old-k8s-version-796400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1011 19:01:43.125424    1140 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-796400 old-k8s-version-796400
	I1011 19:01:42.854419    9448 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1011 19:01:42.854419    9448 start.go:159] libmachine.API.Create for "no-preload-517500" (driver="docker")
	I1011 19:01:42.854419    9448 client.go:168] LocalClient.Create starting
	I1011 19:01:42.855424    9448 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I1011 19:01:42.855424    9448 main.go:141] libmachine: Decoding PEM data...
	I1011 19:01:42.855424    9448 main.go:141] libmachine: Parsing certificate...
	I1011 19:01:42.855424    9448 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I1011 19:01:42.855424    9448 main.go:141] libmachine: Decoding PEM data...
	I1011 19:01:42.856429    9448 main.go:141] libmachine: Parsing certificate...
	I1011 19:01:42.864439    9448 cli_runner.go:164] Run: docker network inspect no-preload-517500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1011 19:01:42.907432    9448 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.002426    9448 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.034418    9448 cli_runner.go:211] docker network inspect no-preload-517500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1011 19:01:43.041435    9448 network_create.go:281] running [docker network inspect no-preload-517500] to gather additional debugging logs...
	I1011 19:01:43.041435    9448 cli_runner.go:164] Run: docker network inspect no-preload-517500
	W1011 19:01:43.096417    9448 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.192425    9448 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.207424    9448 cli_runner.go:211] docker network inspect no-preload-517500 returned with exit code 1
	I1011 19:01:43.207424    9448 network_create.go:284] error running [docker network inspect no-preload-517500]: docker network inspect no-preload-517500: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-517500 not found
	I1011 19:01:43.207424    9448 network_create.go:286] output of [docker network inspect no-preload-517500]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-517500 not found
	
	** /stderr **
	I1011 19:01:43.215429    9448 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1011 19:01:43.296426    9448 image.go:187] authn lookup for registry.k8s.io/coredns/coredns:v1.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.396477    9448 image.go:187] authn lookup for registry.k8s.io/pause:3.9 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:01:43.484440    9448 image.go:187] authn lookup for registry.k8s.io/etcd:3.5.9-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:01:43.591968    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2
	I1011 19:01:43.602262    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2
	I1011 19:01:43.608478    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2
	I1011 19:01:43.638969    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1
	I1011 19:01:43.658470    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2
	I1011 19:01:43.804903    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9
	I1011 19:01:43.857140    9448 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0
	I1011 19:01:43.924468    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9 exists
	I1011 19:01:43.924468    9448 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.9" took 1.323971s
	I1011 19:01:43.924468    9448 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9 succeeded
	I1011 19:01:44.248025    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1 exists
	I1011 19:01:44.248025    9448 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.10.1" took 1.6475262s
	I1011 19:01:44.248025    9448 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1 succeeded
	I1011 19:01:44.508531    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2 exists
	I1011 19:01:44.508531    9448 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.28.2" took 1.9080315s
	I1011 19:01:44.508531    9448 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2 succeeded
	I1011 19:01:45.264475    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2 exists
	I1011 19:01:45.265170    9448 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.28.2" took 2.6646672s
	I1011 19:01:45.265240    9448 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2 succeeded
	I1011 19:01:45.553958    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2 exists
	I1011 19:01:45.554434    9448 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.28.2" took 2.9539295s
	I1011 19:01:45.554434    9448 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2 succeeded
	I1011 19:01:45.795975    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2 exists
	I1011 19:01:45.795975    9448 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.28.2" took 3.1954694s
	I1011 19:01:45.795975    9448 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2 succeeded
	I1011 19:01:45.901699    9448 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0 exists
	I1011 19:01:45.901894    9448 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.5.9-0" took 3.3013882s
	I1011 19:01:45.901894    9448 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0 succeeded
	I1011 19:01:45.901894    9448 cache.go:87] Successfully saved all images to host disk.
	I1011 19:01:44.361905    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:44.361990    5476 retry.go:31] will retry after 1.517209465s: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:45.884001    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:01:45.918915    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:45.918915    5476 kubeadm.go:611] needs reconfigure: apiserver error: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:01:45.918915    5476 kubeadm.go:1128] stopping kube-system containers ...
	I1011 19:01:45.926573    5476 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 19:01:45.985555    5476 docker.go:464] Stopping containers: [3a68e3e25c04 46a29adb775e 8cbb52bc4624 6ae2fd93692b e3fc1b46e1fe 8dd1f463809a ffd4e4805972 0f568baa0e86 7916b8c1fe0a b6ad1cd53788 2940af478bf4 19c537ee4c38 2723c4f657e7 5e12dae82588 c204959f5acc df774251868c 6945e4707c4d 039067db1d5f f6959012a17f 7da3ce2f52d8 ded1b9f0e8c8 92e3eeffa421]
	I1011 19:01:45.991576    5476 ssh_runner.go:195] Run: docker stop 3a68e3e25c04 46a29adb775e 8cbb52bc4624 6ae2fd93692b e3fc1b46e1fe 8dd1f463809a ffd4e4805972 0f568baa0e86 7916b8c1fe0a b6ad1cd53788 2940af478bf4 19c537ee4c38 2723c4f657e7 5e12dae82588 c204959f5acc df774251868c 6945e4707c4d 039067db1d5f f6959012a17f 7da3ce2f52d8 ded1b9f0e8c8 92e3eeffa421
	I1011 19:01:49.417172    1140 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-796400 old-k8s-version-796400: (6.2917196s)
	I1011 19:01:49.417172    1140 network_create.go:108] docker network old-k8s-version-796400 192.168.76.0/24 created
	I1011 19:01:49.417172    1140 kic.go:118] calculated static IP "192.168.76.2" for the "old-k8s-version-796400" container
	I1011 19:01:49.429188    1140 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1011 19:01:49.627666    1140 cli_runner.go:164] Run: docker volume create old-k8s-version-796400 --label name.minikube.sigs.k8s.io=old-k8s-version-796400 --label created_by.minikube.sigs.k8s.io=true
	I1011 19:01:49.825172    1140 oci.go:103] Successfully created a docker volume old-k8s-version-796400
	I1011 19:01:49.830157    1140 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-796400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-796400 --entrypoint /usr/bin/test -v old-k8s-version-796400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1011 19:01:49.307012    9448 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (6.091195s)
	I1011 19:01:49.337488    9448 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:49.369179    9448 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:49.417172    9448 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:49.449183    9448 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:49.485276    9448 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00279ad80}
	I1011 19:01:49.485375    9448 network_create.go:124] attempt to create docker network no-preload-517500 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1011 19:01:49.493584    9448 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-517500 no-preload-517500
	W1011 19:01:49.683382    9448 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-517500 no-preload-517500 returned with exit code 1
	W1011 19:01:49.683462    9448 network_create.go:149] failed to create docker network no-preload-517500 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-517500 no-preload-517500: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1011 19:01:49.683488    9448 network_create.go:116] failed to create docker network no-preload-517500 192.168.85.0/24, will retry: subnet is taken
	I1011 19:01:49.714450    9448 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1011 19:01:49.734862    9448 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00279b590}
	I1011 19:01:49.734862    9448 network_create.go:124] attempt to create docker network no-preload-517500 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1011 19:01:49.740890    9448 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-517500 no-preload-517500
	I1011 19:01:55.444916    9448 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-517500 no-preload-517500: (5.7039999s)
	I1011 19:01:55.444916    9448 network_create.go:108] docker network no-preload-517500 192.168.94.0/24 created
	I1011 19:01:55.444916    9448 kic.go:118] calculated static IP "192.168.94.2" for the "no-preload-517500" container
	I1011 19:01:55.466232    9448 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1011 19:01:55.671344    9448 cli_runner.go:164] Run: docker volume create no-preload-517500 --label name.minikube.sigs.k8s.io=no-preload-517500 --label created_by.minikube.sigs.k8s.io=true
	I1011 19:01:55.916966    9448 oci.go:103] Successfully created a docker volume no-preload-517500
	I1011 19:01:55.924968    9448 cli_runner.go:164] Run: docker run --rm --name no-preload-517500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-517500 --entrypoint /usr/bin/test -v no-preload-517500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1011 19:01:57.663053    5476 ssh_runner.go:235] Completed: docker stop 3a68e3e25c04 46a29adb775e 8cbb52bc4624 6ae2fd93692b e3fc1b46e1fe 8dd1f463809a ffd4e4805972 0f568baa0e86 7916b8c1fe0a b6ad1cd53788 2940af478bf4 19c537ee4c38 2723c4f657e7 5e12dae82588 c204959f5acc df774251868c 6945e4707c4d 039067db1d5f f6959012a17f 7da3ce2f52d8 ded1b9f0e8c8 92e3eeffa421: (11.6714234s)
	I1011 19:01:57.678496    5476 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1011 19:01:55.617318    1408 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-769500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (27.1777474s)
	I1011 19:01:55.617318    1408 kic.go:200] duration metric: took 27.185808 seconds to extract preloaded images to volume
	I1011 19:01:55.624329    1408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:56.026983    1408 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:80 SystemTime:2023-10-11 19:01:55.9726196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:56.033958    1408 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1011 19:01:56.473138    1408 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-769500 --name force-systemd-env-769500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-769500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-769500 --network force-systemd-env-769500 --ip 192.168.67.2 --volume force-systemd-env-769500:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1011 19:01:57.958144    1408 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-769500 --name force-systemd-env-769500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-769500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-769500 --network force-systemd-env-769500 --ip 192.168.67.2 --volume force-systemd-env-769500:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae: (1.4849993s)
	I1011 19:01:57.970865    1408 cli_runner.go:164] Run: docker container inspect force-systemd-env-769500 --format={{.State.Running}}
	I1011 19:01:57.234958    1140 cli_runner.go:217] Completed: docker run --rm --name old-k8s-version-796400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-796400 --entrypoint /usr/bin/test -v old-k8s-version-796400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (7.4047669s)
	I1011 19:01:57.234958    1140 oci.go:107] Successfully prepared a docker volume old-k8s-version-796400
	I1011 19:01:57.234958    1140 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1011 19:01:57.234958    1140 kic.go:191] Starting extracting preloaded images to volume ...
	I1011 19:01:57.240923    1140 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-796400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1011 19:01:58.145307    9448 cli_runner.go:217] Completed: docker run --rm --name no-preload-517500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-517500 --entrypoint /usr/bin/test -v no-preload-517500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (2.2193272s)
	I1011 19:01:58.145307    9448 oci.go:107] Successfully prepared a docker volume no-preload-517500
	I1011 19:01:58.145307    9448 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 19:01:58.155330    9448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:01:58.601383    9448 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:87 SystemTime:2023-10-11 19:01:58.5322016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:01:58.610359    9448 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1011 19:01:59.036168    9448 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-517500 --name no-preload-517500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-517500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-517500 --network no-preload-517500 --ip 192.168.94.2 --volume no-preload-517500:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1011 19:02:00.315953    9448 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-517500 --name no-preload-517500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-517500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-517500 --network no-preload-517500 --ip 192.168.94.2 --volume no-preload-517500:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae: (1.2797419s)
	I1011 19:02:00.326681    9448 cli_runner.go:164] Run: docker container inspect no-preload-517500 --format={{.State.Running}}
	I1011 19:02:00.560578    9448 cli_runner.go:164] Run: docker container inspect no-preload-517500 --format={{.State.Status}}
	I1011 19:02:00.783689    9448 cli_runner.go:164] Run: docker exec no-preload-517500 stat /var/lib/dpkg/alternatives/iptables
	I1011 19:02:01.179436    9448 oci.go:144] the created container "no-preload-517500" has a running status.
	I1011 19:02:01.179436    9448 kic.go:222] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa...
	I1011 19:01:58.092441    5476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 19:01:58.176305    5476 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Oct 11 19:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 11 19:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Oct 11 19:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 11 19:00 /etc/kubernetes/scheduler.conf
	
	I1011 19:01:58.191317    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 19:01:58.275299    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 19:01:58.425341    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 19:01:58.457331    5476 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1011 19:01:58.473314    5476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 19:01:58.512305    5476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 19:01:58.538307    5476 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1011 19:01:58.550306    5476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 19:01:58.593354    5476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 19:01:58.619362    5476 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1011 19:01:58.619362    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:01:58.844867    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.133319    5476 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2884459s)
	I1011 19:02:00.133319    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.570573    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.768689    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:00.993166    5476 api_server.go:52] waiting for apiserver process to appear ...
	I1011 19:02:01.010109    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:01.162421    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:01.792140    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:02.294577    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:02.805479    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:01:58.200307    1408 cli_runner.go:164] Run: docker container inspect force-systemd-env-769500 --format={{.State.Status}}
	I1011 19:01:58.442311    1408 cli_runner.go:164] Run: docker exec force-systemd-env-769500 stat /var/lib/dpkg/alternatives/iptables
	I1011 19:01:58.807036    1408 oci.go:144] the created container "force-systemd-env-769500" has a running status.
	I1011 19:01:58.807036    1408 kic.go:222] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa...
	I1011 19:01:59.432646    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1011 19:01:59.442921    1408 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1011 19:01:59.711878    1408 cli_runner.go:164] Run: docker container inspect force-systemd-env-769500 --format={{.State.Status}}
	I1011 19:01:59.933686    1408 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1011 19:01:59.934688    1408 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-769500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1011 19:02:00.258658    1408 kic.go:262] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa...
	I1011 19:02:01.439339    9448 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1011 19:02:01.693203    9448 cli_runner.go:164] Run: docker container inspect no-preload-517500 --format={{.State.Status}}
	I1011 19:02:01.958033    9448 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1011 19:02:01.958033    9448 kic_runner.go:114] Args: [docker exec --privileged no-preload-517500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1011 19:02:02.313534    9448 kic.go:262] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa...
	I1011 19:02:05.303269    9448 cli_runner.go:164] Run: docker container inspect no-preload-517500 --format={{.State.Status}}
	I1011 19:02:05.500674    9448 machine.go:88] provisioning docker machine ...
	I1011 19:02:05.500674    9448 ubuntu.go:169] provisioning hostname "no-preload-517500"
	I1011 19:02:05.506684    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:05.694505    9448 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:05.703505    9448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52842 <nil> <nil>}
	I1011 19:02:05.704506    9448 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-517500 && echo "no-preload-517500" | sudo tee /etc/hostname
	I1011 19:02:05.941504    9448 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-517500
	
	I1011 19:02:05.947487    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:06.139106    9448 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:06.140111    9448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52842 <nil> <nil>}
	I1011 19:02:06.140111    9448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-517500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-517500/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-517500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 19:02:03.297188    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:03.798871    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:04.290450    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:04.565797    5476 api_server.go:72] duration metric: took 3.5726139s to wait for apiserver process to appear ...
	I1011 19:02:04.565797    5476 api_server.go:88] waiting for apiserver healthz status ...
	I1011 19:02:04.565797    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:04.570848    5476 api_server.go:269] stopped: https://127.0.0.1:52534/healthz: Get "https://127.0.0.1:52534/healthz": EOF
	I1011 19:02:04.570848    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:04.574810    5476 api_server.go:269] stopped: https://127.0.0.1:52534/healthz: Get "https://127.0.0.1:52534/healthz": EOF
	I1011 19:02:05.090000    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:03.557738    1408 cli_runner.go:164] Run: docker container inspect force-systemd-env-769500 --format={{.State.Status}}
	I1011 19:02:03.741504    1408 machine.go:88] provisioning docker machine ...
	I1011 19:02:03.741562    1408 ubuntu.go:169] provisioning hostname "force-systemd-env-769500"
	I1011 19:02:03.749615    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:03.956819    1408 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:03.972078    1408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52837 <nil> <nil>}
	I1011 19:02:03.972078    1408 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-769500 && echo "force-systemd-env-769500" | sudo tee /etc/hostname
	I1011 19:02:04.200199    1408 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-769500
	
	I1011 19:02:04.212463    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:04.437484    1408 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:04.438448    1408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52837 <nil> <nil>}
	I1011 19:02:04.438448    1408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-769500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-769500/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-769500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 19:02:04.645764    1408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 19:02:04.645764    1408 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I1011 19:02:04.645764    1408 ubuntu.go:177] setting up certificates
	I1011 19:02:04.645764    1408 provision.go:83] configureAuth start
	I1011 19:02:04.657680    1408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-769500
	I1011 19:02:04.855615    1408 provision.go:138] copyHostCerts
	I1011 19:02:04.855678    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem
	I1011 19:02:04.855678    1408 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I1011 19:02:04.855678    1408 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I1011 19:02:04.856858    1408 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I1011 19:02:04.858329    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem
	I1011 19:02:04.858707    1408 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I1011 19:02:04.858802    1408 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I1011 19:02:04.859357    1408 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1011 19:02:04.860753    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem
	I1011 19:02:04.861306    1408 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I1011 19:02:04.861306    1408 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I1011 19:02:04.861814    1408 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1011 19:02:04.863602    1408 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-env-769500 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-env-769500]
	I1011 19:02:04.988779    1408 provision.go:172] copyRemoteCerts
	I1011 19:02:05.002492    1408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 19:02:05.012347    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:05.198306    1408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52837 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa Username:docker}
	I1011 19:02:05.337599    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1011 19:02:05.337599    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 19:02:05.407652    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1011 19:02:05.407652    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1245 bytes)
	I1011 19:02:05.459684    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1011 19:02:05.459684    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 19:02:05.518671    1408 provision.go:86] duration metric: configureAuth took 872.9037ms
	I1011 19:02:05.518671    1408 ubuntu.go:193] setting minikube options for container-runtime
	I1011 19:02:05.518671    1408 config.go:182] Loaded profile config "force-systemd-env-769500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:02:05.524677    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:05.711506    1408 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:05.711506    1408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52837 <nil> <nil>}
	I1011 19:02:05.711506    1408 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1011 19:02:05.912893    1408 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1011 19:02:05.912893    1408 ubuntu.go:71] root file system type: overlay
	I1011 19:02:05.913432    1408 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1011 19:02:05.920485    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:06.121108    1408 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:06.122103    1408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52837 <nil> <nil>}
	I1011 19:02:06.122103    1408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1011 19:02:06.349458    1408 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1011 19:02:06.357469    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:06.547733    1408 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:06.547733    1408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52837 <nil> <nil>}
	I1011 19:02:06.547733    1408 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1011 19:02:06.329965    9448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 19:02:06.329965    9448 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I1011 19:02:06.329965    9448 ubuntu.go:177] setting up certificates
	I1011 19:02:06.329965    9448 provision.go:83] configureAuth start
	I1011 19:02:06.336445    9448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-517500
	I1011 19:02:06.528982    9448 provision.go:138] copyHostCerts
	I1011 19:02:06.529475    9448 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I1011 19:02:06.529527    9448 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I1011 19:02:06.529773    9448 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1011 19:02:06.531481    9448 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I1011 19:02:06.531565    9448 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I1011 19:02:06.532008    9448 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1011 19:02:06.533503    9448 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I1011 19:02:06.533503    9448 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I1011 19:02:06.533804    9448 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I1011 19:02:06.534867    9448 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-517500 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-517500]
	I1011 19:02:06.749859    9448 provision.go:172] copyRemoteCerts
	I1011 19:02:06.766479    9448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 19:02:06.774601    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:06.966369    9448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52842 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa Username:docker}
	I1011 19:02:07.123826    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 19:02:07.181134    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1011 19:02:07.231309    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 19:02:07.287250    9448 provision.go:86] duration metric: configureAuth took 957.2808ms
	I1011 19:02:07.287343    9448 ubuntu.go:193] setting minikube options for container-runtime
	I1011 19:02:07.287996    9448 config.go:182] Loaded profile config "no-preload-517500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:02:07.297026    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:07.495274    9448 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:07.496720    9448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52842 <nil> <nil>}
	I1011 19:02:07.496720    9448 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1011 19:02:07.695572    9448 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1011 19:02:07.695572    9448 ubuntu.go:71] root file system type: overlay
	I1011 19:02:07.696580    9448 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1011 19:02:07.709561    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:07.897756    9448 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:07.898799    9448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52842 <nil> <nil>}
	I1011 19:02:07.898799    9448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1011 19:02:08.130674    9448 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1011 19:02:08.136507    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:08.350233    9448 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:08.351472    9448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52842 <nil> <nil>}
	I1011 19:02:08.351472    9448 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1011 19:02:09.858527    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1011 19:02:09.859067    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1011 19:02:09.859067    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:10.154272    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:10.154272    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:10.154272    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:10.169841    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:10.169841    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:10.585851    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:10.598549    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:10.598549    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:11.075408    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:11.446265    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:11.446265    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:11.580523    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:11.595954    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:11.595954    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:12.086403    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:12.119892    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:12.119892    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:12.588448    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:12.664492    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:12.664492    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
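The repeated `/healthz` dumps above all follow the same verbose format: one line per check, `[+]name ok` for passing checks and `[-]name failed: reason withheld` for failing ones, with a trailing summary line. A small sketch (not part of minikube; the function name is hypothetical) for summarizing which checks fail in such a block:

```python
# Sketch: extract failing check names from kube-apiserver /healthz verbose
# output, where failing lines look like "[-]etcd failed: reason withheld".
def failing_checks(healthz_output: str) -> list[str]:
    failed = []
    for line in healthz_output.splitlines():
        line = line.strip()
        if line.startswith("[-]"):
            # "[-]etcd failed: reason withheld" -> "etcd"
            failed.append(line[3:].split(" failed", 1)[0])
    return failed

sample = """[+]ping ok
[-]etcd failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
healthz check failed"""
print(failing_checks(sample))  # ['etcd', 'poststarthook/rbac/bootstrap-roles']
```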
	I1011 19:02:12.393309    1408 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-11 19:02:06.336215000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
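	The diff above depends on systemd's drop-in override behavior, which its own comment describes: `ExecStart=` is list-valued, so an override unit must first clear the command inherited from the base unit with an empty assignment before supplying its own, or systemd rejects the unit with "more than one ExecStart= setting". A minimal illustration of the pattern (file path and command are hypothetical, not taken from this log):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical path)
[Service]
# Empty assignment clears the ExecStart inherited from the base unit.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```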
	
	I1011 19:02:12.393413    1408 machine.go:91] provisioned docker machine in 8.6518111s
	I1011 19:02:12.393413    1408 client.go:171] LocalClient.Create took 47.7096332s
	I1011 19:02:12.393486    1408 start.go:167] duration metric: libmachine.API.Create for "force-systemd-env-769500" took 47.7097058s
	I1011 19:02:12.393556    1408 start.go:300] post-start starting for "force-systemd-env-769500" (driver="docker")
	I1011 19:02:12.393556    1408 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 19:02:12.408935    1408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 19:02:12.415924    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:12.604426    1408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52837 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa Username:docker}
	I1011 19:02:12.755561    1408 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 19:02:12.768566    1408 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 19:02:12.768566    1408 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 19:02:12.768566    1408 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 19:02:12.768566    1408 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1011 19:02:12.768566    1408 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I1011 19:02:12.769567    1408 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I1011 19:02:12.770613    1408 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem -> 15562.pem in /etc/ssl/certs
	I1011 19:02:12.770613    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem -> /etc/ssl/certs/15562.pem
	I1011 19:02:12.786545    1408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 19:02:12.809747    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem --> /etc/ssl/certs/15562.pem (1708 bytes)
	I1011 19:02:12.861908    1408 start.go:303] post-start completed in 468.3504ms
	I1011 19:02:12.877896    1408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-769500
	I1011 19:02:13.077337    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:13.164113    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:13.164113    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:13.583348    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:13.664216    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:13.664216    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:14.088164    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:14.153505    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1011 19:02:14.153505    5476 api_server.go:103] status: https://127.0.0.1:52534/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1011 19:02:14.589228    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:14.601220    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 200:
	ok
	I1011 19:02:14.619236    5476 api_server.go:141] control plane version: v1.28.2
	I1011 19:02:14.619236    5476 api_server.go:131] duration metric: took 10.0533926s to wait for apiserver health ...
	I1011 19:02:14.619236    5476 cni.go:84] Creating CNI manager for ""
	I1011 19:02:14.619236    5476 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 19:02:14.622242    5476 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1011 19:02:12.493431    9448 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-11 19:02:08.116215000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1011 19:02:12.493431    9448 machine.go:91] provisioned docker machine in 6.9927243s
	I1011 19:02:12.493431    9448 client.go:171] LocalClient.Create took 29.6378708s
	I1011 19:02:12.493431    9448 start.go:167] duration metric: libmachine.API.Create for "no-preload-517500" took 29.6388754s
	I1011 19:02:12.493431    9448 start.go:300] post-start starting for "no-preload-517500" (driver="docker")
	I1011 19:02:12.493431    9448 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 19:02:12.508441    9448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 19:02:12.516434    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:12.698424    9448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52842 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa Username:docker}
	I1011 19:02:12.848744    9448 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 19:02:12.861081    9448 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 19:02:12.861383    9448 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 19:02:12.861383    9448 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 19:02:12.861383    9448 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1011 19:02:12.861383    9448 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I1011 19:02:12.861908    9448 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I1011 19:02:12.862884    9448 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem -> 15562.pem in /etc/ssl/certs
	I1011 19:02:12.879894    9448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 19:02:12.901872    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem --> /etc/ssl/certs/15562.pem (1708 bytes)
	I1011 19:02:12.959018    9448 start.go:303] post-start completed in 465.5855ms
	I1011 19:02:12.974995    9448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-517500
	I1011 19:02:13.171132    9448 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\config.json ...
	I1011 19:02:13.186096    9448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 19:02:13.193108    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:13.378586    9448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52842 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa Username:docker}
	I1011 19:02:13.525338    9448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 19:02:13.536340    9448 start.go:128] duration metric: createHost completed in 30.6917691s
	I1011 19:02:13.536340    9448 start.go:83] releasing machines lock for "no-preload-517500", held for 30.6917691s
	I1011 19:02:13.542335    9448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-517500
	I1011 19:02:13.743208    9448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 19:02:13.749221    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:13.752232    9448 ssh_runner.go:195] Run: cat /version.json
	I1011 19:02:13.764231    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:13.946513    9448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52842 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa Username:docker}
	I1011 19:02:13.961148    9448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52842 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\no-preload-517500\id_rsa Username:docker}
	I1011 19:02:14.471719    9448 ssh_runner.go:195] Run: systemctl --version
	I1011 19:02:14.502233    9448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 19:02:14.528221    9448 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W1011 19:02:14.549242    9448 start.go:416] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I1011 19:02:14.561221    9448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 19:02:14.641236    9448 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 19:02:14.641236    9448 start.go:472] detecting cgroup driver to use...
	I1011 19:02:14.641236    9448 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1011 19:02:14.641236    9448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:02:14.691220    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1011 19:02:14.725217    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 19:02:14.748226    9448 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1011 19:02:14.762665    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1011 19:02:14.810957    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:02:14.843554    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 19:02:14.881201    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:02:14.918191    9448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 19:02:14.947211    9448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 19:02:14.987676    9448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 19:02:15.019662    9448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 19:02:15.050684    9448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:15.314353    9448 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1011 19:02:15.525031    9448 start.go:472] detecting cgroup driver to use...
	I1011 19:02:15.525031    9448 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1011 19:02:15.534018    9448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1011 19:02:15.562019    9448 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1011 19:02:15.572065    9448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 19:02:15.599017    9448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:02:15.698229    9448 ssh_runner.go:195] Run: which cri-dockerd
	I1011 19:02:15.727094    9448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1011 19:02:15.756123    9448 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1011 19:02:15.815360    9448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1011 19:02:16.020625    9448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1011 19:02:16.194824    9448 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1011 19:02:16.194824    9448 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1011 19:02:16.245886    9448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:14.633210    5476 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1011 19:02:14.655235    5476 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1011 19:02:14.699224    5476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 19:02:14.714223    5476 system_pods.go:59] 6 kube-system pods found
	I1011 19:02:14.714223    5476 system_pods.go:61] "coredns-5dd5756b68-g2h9s" [6626c9fe-763e-46b0-a66a-5bd39e157d8d] Running
	I1011 19:02:14.714223    5476 system_pods.go:61] "etcd-pause-375900" [4bf74444-af82-451c-b1d3-36e322aebe0b] Running
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-apiserver-pause-375900" [4b6ca595-2579-4609-972b-3d352dbc9971] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-controller-manager-pause-375900" [b6f72b4d-30c2-4679-9634-612b1e81dc5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-proxy-6wv6x" [86829575-b97b-4960-a459-934aecb00dd5] Running
	I1011 19:02:14.714223    5476 system_pods.go:61] "kube-scheduler-pause-375900" [e9b9a50f-2ab2-414b-a8a4-51708cdfb4d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1011 19:02:14.714223    5476 system_pods.go:74] duration metric: took 14.9987ms to wait for pod list to return data ...
	I1011 19:02:14.714223    5476 node_conditions.go:102] verifying NodePressure condition ...
	I1011 19:02:14.748226    5476 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1011 19:02:14.748226    5476 node_conditions.go:123] node cpu capacity is 16
	I1011 19:02:14.748226    5476 node_conditions.go:105] duration metric: took 34.0027ms to run NodePressure ...
	I1011 19:02:14.748226    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1011 19:02:15.356351    5476 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1011 19:02:15.367367    5476 kubeadm.go:787] kubelet initialised
	I1011 19:02:15.367367    5476 kubeadm.go:788] duration metric: took 11.0154ms waiting for restarted kubelet to initialise ...
	I1011 19:02:15.367367    5476 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:15.379351    5476 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:15.395368    5476 pod_ready.go:92] pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:15.395368    5476 pod_ready.go:81] duration metric: took 16.0168ms waiting for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:15.395368    5476 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.763545    5476 pod_ready.go:92] pod "etcd-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:16.763615    5476 pod_ready.go:81] duration metric: took 1.368241s waiting for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.763615    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.778930    5476 pod_ready.go:92] pod "kube-apiserver-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:16.778930    5476 pod_ready.go:81] duration metric: took 15.2525ms waiting for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:16.778930    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:13.077790    1408 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\config.json ...
	I1011 19:02:13.103710    1408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 19:02:13.109487    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:13.298717    1408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52837 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa Username:docker}
	I1011 19:02:13.430222    1408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 19:02:13.440812    1408 start.go:128] duration metric: createHost completed in 48.7620324s
	I1011 19:02:13.440812    1408 start.go:83] releasing machines lock for "force-systemd-env-769500", held for 48.7630068s
	I1011 19:02:13.449225    1408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-769500
	I1011 19:02:13.633199    1408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 19:02:13.645400    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:13.650613    1408 ssh_runner.go:195] Run: cat /version.json
	I1011 19:02:13.665226    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:13.854232    1408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52837 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa Username:docker}
	I1011 19:02:13.882211    1408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52837 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\force-systemd-env-769500\id_rsa Username:docker}
	I1011 19:02:14.198512    1408 ssh_runner.go:195] Run: systemctl --version
	I1011 19:02:14.218498    1408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 19:02:14.238509    1408 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W1011 19:02:14.263514    1408 start.go:416] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I1011 19:02:14.273493    1408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 19:02:14.343499    1408 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 19:02:14.343499    1408 start.go:472] detecting cgroup driver to use...
	I1011 19:02:14.343499    1408 start.go:476] using "systemd" cgroup driver as enforced via flags
	I1011 19:02:14.343499    1408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:02:14.406896    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1011 19:02:14.442043    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 19:02:14.475547    1408 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I1011 19:02:14.492761    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1011 19:02:14.530250    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:02:14.563215    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 19:02:14.605219    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:02:14.645245    1408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 19:02:14.681216    1408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 19:02:14.717223    1408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 19:02:14.751220    1408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 19:02:14.798124    1408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:14.971208    1408 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1011 19:02:15.164717    1408 start.go:472] detecting cgroup driver to use...
	I1011 19:02:15.164717    1408 start.go:476] using "systemd" cgroup driver as enforced via flags
	I1011 19:02:15.176716    1408 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1011 19:02:15.209682    1408 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1011 19:02:15.221687    1408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 19:02:15.356351    1408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:02:15.463266    1408 ssh_runner.go:195] Run: which cri-dockerd
	I1011 19:02:15.484252    1408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1011 19:02:15.511759    1408 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1011 19:02:15.563029    1408 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1011 19:02:15.768366    1408 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1011 19:02:15.914983    1408 docker.go:555] configuring docker to use "systemd" as cgroup driver...
	I1011 19:02:15.914983    1408 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1011 19:02:15.971601    1408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:16.138182    1408 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 19:02:17.838191    1408 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.700001s)
	I1011 19:02:17.847190    1408 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 19:02:18.019858    1408 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1011 19:02:18.203364    1408 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 19:02:18.407702    1408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:18.529279    1408 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1011 19:02:18.629649    1408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:18.829632    1408 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1011 19:02:18.997377    1408 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1011 19:02:19.011416    1408 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1011 19:02:19.022407    1408 start.go:540] Will wait 60s for crictl version
	I1011 19:02:19.033405    1408 ssh_runner.go:195] Run: which crictl
	I1011 19:02:19.057421    1408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 19:02:19.177113    1408 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1011 19:02:19.187100    1408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:02:19.260275    1408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:02:16.407929    9448 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 19:02:17.937731    9448 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.3708348s)
	I1011 19:02:17.953724    9448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 19:02:18.127752    9448 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1011 19:02:18.341424    9448 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1011 19:02:18.519233    9448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:18.758375    9448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1011 19:02:18.824612    9448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:19.028405    9448 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1011 19:02:19.214094    9448 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1011 19:02:19.227123    9448 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1011 19:02:19.245091    9448 start.go:540] Will wait 60s for crictl version
	I1011 19:02:19.262097    9448 ssh_runner.go:195] Run: which crictl
	I1011 19:02:19.290092    9448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 19:02:19.472022    9448 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1011 19:02:19.480031    9448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:02:19.557572    9448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:02:16.943578    1140 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-796400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (19.7023803s)
	I1011 19:02:16.943655    1140 kic.go:200] duration metric: took 19.708607 seconds to extract preloaded images to volume
	I1011 19:02:16.951389    1140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 19:02:17.354443    1140 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:80 SystemTime:2023-10-11 19:02:17.2906278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 19:02:17.363806    1140 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1011 19:02:17.754374    1140 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-796400 --name old-k8s-version-796400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-796400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-796400 --network old-k8s-version-796400 --ip 192.168.76.2 --volume old-k8s-version-796400:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1011 19:02:18.723373    1140 cli_runner.go:164] Run: docker container inspect old-k8s-version-796400 --format={{.State.Running}}
	I1011 19:02:18.916831    1140 cli_runner.go:164] Run: docker container inspect old-k8s-version-796400 --format={{.State.Status}}
	I1011 19:02:19.134099    1140 cli_runner.go:164] Run: docker exec old-k8s-version-796400 stat /var/lib/dpkg/alternatives/iptables
	I1011 19:02:19.487031    1140 oci.go:144] the created container "old-k8s-version-796400" has a running status.
	I1011 19:02:19.487031    1140 kic.go:222] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa...
	I1011 19:02:19.702811    1140 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1011 19:02:19.948809    1140 cli_runner.go:164] Run: docker container inspect old-k8s-version-796400 --format={{.State.Status}}
	I1011 19:02:20.202629    1140 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1011 19:02:20.202629    1140 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-796400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1011 19:02:20.523631    1140 kic.go:262] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa...
	I1011 19:02:19.691809    9448 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1011 19:02:19.699804    9448 cli_runner.go:164] Run: docker exec -t no-preload-517500 dig +short host.docker.internal
	I1011 19:02:20.096828    9448 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1011 19:02:20.109846    9448 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1011 19:02:20.119846    9448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 19:02:20.160499    9448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-517500
	I1011 19:02:20.363649    9448 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 19:02:20.373644    9448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 19:02:20.420653    9448 docker.go:689] Got preloaded images: 
	I1011 19:02:20.420653    9448 docker.go:695] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I1011 19:02:20.420653    9448 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1011 19:02:20.431643    9448 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 19:02:20.434642    9448 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1011 19:02:20.440641    9448 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1011 19:02:20.442649    9448 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:02:20.444650    9448 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 19:02:20.446645    9448 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:02:20.446645    9448 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:02:20.446645    9448 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:02:20.447640    9448 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:02:20.452660    9448 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1011 19:02:20.453639    9448 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1011 19:02:20.457653    9448 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:02:20.458646    9448 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:02:20.460651    9448 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:02:20.460651    9448 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:02:20.467640    9448 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	W1011 19:02:20.552028    9448 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:02:20.646946    9448 image.go:187] authn lookup for registry.k8s.io/pause:3.9 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:02:20.753950    9448 image.go:187] authn lookup for registry.k8s.io/etcd:3.5.9-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1011 19:02:20.857934    9448 image.go:187] authn lookup for registry.k8s.io/coredns/coredns:v1.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:02:20.867321    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 19:02:20.919783    9448 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1011 19:02:20.919864    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1011 19:02:20.919957    9448 docker.go:318] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 19:02:20.929001    9448 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1011 19:02:20.966803    9448 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:02:20.999799    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1011 19:02:21.014773    9448 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1011 19:02:21.025812    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1011 19:02:21.025812    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1011 19:02:21.067782    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	W1011 19:02:21.080810    9448 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:02:21.123815    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1011 19:02:21.166853    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:02:21.172772    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	W1011 19:02:21.207810    9448 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:02:21.295024    9448 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I1011 19:02:21.295024    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.9 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9
	I1011 19:02:21.295024    9448 docker.go:318] Removing image: registry.k8s.io/pause:3.9
	I1011 19:02:18.873656    5476 pod_ready.go:102] pod "kube-controller-manager-pause-375900" in "kube-system" namespace has status "Ready":"False"
	I1011 19:02:21.371026    5476 pod_ready.go:92] pod "kube-controller-manager-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.371026    5476 pod_ready.go:81] duration metric: took 4.5920747s waiting for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.371026    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.391043    5476 pod_ready.go:92] pod "kube-proxy-6wv6x" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.391043    5476 pod_ready.go:81] duration metric: took 20.0173ms waiting for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.391043    5476 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.412018    5476 pod_ready.go:92] pod "kube-scheduler-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.412018    5476 pod_ready.go:81] duration metric: took 20.9746ms waiting for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.412018    5476 pod_ready.go:38] duration metric: took 6.0446238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:21.412018    5476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 19:02:21.432014    5476 ops.go:34] apiserver oom_adj: -16
	I1011 19:02:21.432014    5476 kubeadm.go:640] restartCluster took 50.1559392s
	I1011 19:02:21.432014    5476 kubeadm.go:406] StartCluster complete in 50.7571735s
	I1011 19:02:21.432014    5476 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:21.432014    5476 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1011 19:02:21.433012    5476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:21.434012    5476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 19:02:21.434012    5476 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1011 19:02:21.440018    5476 out.go:177] * Enabled addons: 
	I1011 19:02:21.435016    5476 config.go:182] Loaded profile config "pause-375900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 19:02:21.444015    5476 addons.go:502] enable addons completed in 10.0033ms: enabled=[]
	I1011 19:02:21.448019    5476 kapi.go:59] client config for pause-375900: &rest.Config{Host:"https://127.0.0.1:52534", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\pause-375900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\pause-375900\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e44dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1011 19:02:21.457026    5476 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-375900" context rescaled to 1 replicas
	I1011 19:02:21.457026    5476 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1011 19:02:21.462035    5476 out.go:177] * Verifying Kubernetes components...
	I1011 19:02:21.479022    5476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 19:02:21.616332    5476 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1011 19:02:21.627333    5476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-375900
	I1011 19:02:21.839597    5476 node_ready.go:35] waiting up to 6m0s for node "pause-375900" to be "Ready" ...
	I1011 19:02:21.850416    5476 node_ready.go:49] node "pause-375900" has status "Ready":"True"
	I1011 19:02:21.850509    5476 node_ready.go:38] duration metric: took 10.7886ms waiting for node "pause-375900" to be "Ready" ...
	I1011 19:02:21.850509    5476 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:21.864611    5476 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.885624    5476 pod_ready.go:92] pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.885624    5476 pod_ready.go:81] duration metric: took 21.0126ms waiting for pod "coredns-5dd5756b68-g2h9s" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.885624    5476 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.898610    5476 pod_ready.go:92] pod "etcd-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:21.898610    5476 pod_ready.go:81] duration metric: took 12.986ms waiting for pod "etcd-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:21.898610    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.179285    5476 pod_ready.go:92] pod "kube-apiserver-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:22.179285    5476 pod_ready.go:81] duration metric: took 280.6738ms waiting for pod "kube-apiserver-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.179285    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.565413    5476 pod_ready.go:92] pod "kube-controller-manager-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:22.566410    5476 pod_ready.go:81] duration metric: took 387.1239ms waiting for pod "kube-controller-manager-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.566410    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.977552    5476 pod_ready.go:92] pod "kube-proxy-6wv6x" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:22.977552    5476 pod_ready.go:81] duration metric: took 411.1397ms waiting for pod "kube-proxy-6wv6x" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:22.977552    5476 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:19.324121    1408 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1011 19:02:19.333093    1408 cli_runner.go:164] Run: docker exec -t force-systemd-env-769500 dig +short host.docker.internal
	I1011 19:02:19.706802    1408 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1011 19:02:19.719798    1408 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1011 19:02:19.730811    1408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 19:02:19.761836    1408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-env-769500
	I1011 19:02:19.973803    1408 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 19:02:19.979857    1408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 19:02:20.059255    1408 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 19:02:20.059348    1408 docker.go:619] Images already preloaded, skipping extraction
	I1011 19:02:20.072832    1408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1011 19:02:20.114837    1408 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1011 19:02:20.114837    1408 cache_images.go:84] Images are preloaded, skipping loading
	I1011 19:02:20.123846    1408 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1011 19:02:20.245645    1408 cni.go:84] Creating CNI manager for ""
	I1011 19:02:20.245645    1408 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 19:02:20.246635    1408 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1011 19:02:20.246635    1408 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-769500 NodeName:force-systemd-env-769500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 19:02:20.246635    1408 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-769500"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
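A note on the `"0%!"(MISSING)` strings in the kubelet section above: this is a cosmetic artifact of the config being echoed through a Go printf-style formatter, where the literal `%` in each threshold is consumed as a format verb with no matching argument. The evictionHard block minikube actually writes uses plain percentage strings, i.e. (reconstructed, not copied from this log):

```yaml
# Intended kubelet evictionHard values; the %!"(MISSING) in the log is a fmt artifact.
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
```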
	
	I1011 19:02:20.246635    1408 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=force-systemd-env-769500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-769500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1011 19:02:20.261644    1408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1011 19:02:20.287645    1408 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 19:02:20.298639    1408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 19:02:20.322644    1408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1011 19:02:20.366632    1408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 19:02:20.411638    1408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1011 19:02:20.515633    1408 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1011 19:02:20.525633    1408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 19:02:20.549889    1408 certs.go:56] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500 for IP: 192.168.67.2
	I1011 19:02:20.549889    1408 certs.go:190] acquiring lock for shared ca certs: {Name:mka39b35711ce17aa627001b408a7adb2f266bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.550970    1408 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I1011 19:02:20.551718    1408 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I1011 19:02:20.552601    1408 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.key
	I1011 19:02:20.552816    1408 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.crt with IP's: []
	I1011 19:02:20.739871    1408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.crt ...
	I1011 19:02:20.739871    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.crt: {Name:mk7e160493cd718464216202185387ebafe0343a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.740844    1408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.key ...
	I1011 19:02:20.740844    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\client.key: {Name:mk14484344be0356993d971268ab9d92dc8f8bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.741860    1408 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key.c7fa3a9e
	I1011 19:02:20.741860    1408 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1011 19:02:20.878316    1408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt.c7fa3a9e ...
	I1011 19:02:20.878316    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt.c7fa3a9e: {Name:mkf56219099746704f9edb9f435708c2c5620049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.880312    1408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key.c7fa3a9e ...
	I1011 19:02:20.880312    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key.c7fa3a9e: {Name:mkdc13a86b7046d42aa4b045c535f8512dba25dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.881325    1408 certs.go:337] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt
	I1011 19:02:20.891318    1408 certs.go:341] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key
	I1011 19:02:20.893317    1408 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.key
	I1011 19:02:20.893317    1408 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.crt with IP's: []
	I1011 19:02:20.989783    1408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.crt ...
	I1011 19:02:20.989783    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.crt: {Name:mkeb355b0f0485fea8521ac40fda9fa4bcefbb0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.991793    1408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.key ...
	I1011 19:02:20.991793    1408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.key: {Name:mk973b1dedc5375b99f3f30719f8e07f18894466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 19:02:20.992788    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1011 19:02:20.992788    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1011 19:02:20.992788    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1011 19:02:21.004774    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1011 19:02:21.005779    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1011 19:02:21.005779    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1011 19:02:21.006822    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1011 19:02:21.006822    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1011 19:02:21.006822    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556.pem (1338 bytes)
	W1011 19:02:21.007777    1408 certs.go:433] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556_empty.pem, impossibly tiny 0 bytes
	I1011 19:02:21.007777    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1011 19:02:21.007777    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1011 19:02:21.007777    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1011 19:02:21.008781    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1011 19:02:21.008781    1408 certs.go:437] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem (1708 bytes)
	I1011 19:02:21.008781    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:02:21.008781    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556.pem -> /usr/share/ca-certificates/1556.pem
	I1011 19:02:21.009777    1408 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem -> /usr/share/ca-certificates/15562.pem
	I1011 19:02:21.010770    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1011 19:02:21.086785    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 19:02:21.170823    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 19:02:21.249036    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\force-systemd-env-769500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 19:02:21.330027    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 19:02:21.415043    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 19:02:21.479022    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 19:02:21.548122    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 19:02:21.612371    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 19:02:21.676316    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\1556.pem --> /usr/share/ca-certificates/1556.pem (1338 bytes)
	I1011 19:02:21.730308    1408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem --> /usr/share/ca-certificates/15562.pem (1708 bytes)
	I1011 19:02:21.801344    1408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 19:02:21.870639    1408 ssh_runner.go:195] Run: openssl version
	I1011 19:02:21.899610    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15562.pem && ln -fs /usr/share/ca-certificates/15562.pem /etc/ssl/certs/15562.pem"
	I1011 19:02:21.930604    1408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15562.pem
	I1011 19:02:21.946415    1408 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 11 18:04 /usr/share/ca-certificates/15562.pem
	I1011 19:02:21.964235    1408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15562.pem
	I1011 19:02:21.995242    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15562.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 19:02:22.037729    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 19:02:22.080633    1408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:02:22.091848    1408 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 11 17:53 /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:02:22.104222    1408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 19:02:22.126224    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 19:02:22.166477    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1556.pem && ln -fs /usr/share/ca-certificates/1556.pem /etc/ssl/certs/1556.pem"
	I1011 19:02:22.212099    1408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1556.pem
	I1011 19:02:22.225233    1408 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 11 18:04 /usr/share/ca-certificates/1556.pem
	I1011 19:02:22.239222    1408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1556.pem
	I1011 19:02:22.274635    1408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1556.pem /etc/ssl/certs/51391683.0"
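The three-step sequence repeated above is how minikube installs each extra CA into the node: copy the PEM into `/usr/share/ca-certificates`, compute its subject hash with `openssl x509 -hash -noout`, and symlink `<hash>.0` into `/etc/ssl/certs` so OpenSSL's lookup-by-hash directory scheme can resolve it (e.g. `3ec20f2e.0` and `b5213941.0` in this log). A sketch of the hashing step against a throwaway self-signed certificate (file names are illustrative; requires the `openssl` CLI):

```shell
#!/bin/sh
# Generate a throwaway self-signed cert to stand in for the .pem files in the log.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$DIR/demo.key" -out "$DIR/demo.pem" -days 1 2>/dev/null

# The subject hash is what names the /etc/ssl/certs/<hash>.0 symlink.
HASH=$(openssl x509 -hash -noout -in "$DIR/demo.pem")
ln -fs "$DIR/demo.pem" "$DIR/$HASH.0"

# With the hash link in place, OpenSSL can resolve the issuer by directory lookup.
openssl verify -CApath "$DIR" "$DIR/demo.pem"
```

The `.0` suffix is a collision counter: a second certificate with the same subject hash would be linked as `<hash>.1`, and so on.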
	I1011 19:02:22.318999    1408 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1011 19:02:22.332403    1408 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1011 19:02:22.332830    1408 kubeadm.go:404] StartCluster: {Name:force-systemd-env-769500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-769500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 19:02:22.341515    1408 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1011 19:02:22.406910    1408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 19:02:22.444799    1408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 19:02:22.466211    1408 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1011 19:02:22.475397    1408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 19:02:22.498852    1408 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 19:02:22.499001    1408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1011 19:02:22.713218    1408 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1011 19:02:22.896720    1408 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 19:02:23.368906    5476 pod_ready.go:92] pod "kube-scheduler-pause-375900" in "kube-system" namespace has status "Ready":"True"
	I1011 19:02:23.369009    5476 pod_ready.go:81] duration metric: took 391.4553ms waiting for pod "kube-scheduler-pause-375900" in "kube-system" namespace to be "Ready" ...
	I1011 19:02:23.369009    5476 pod_ready.go:38] duration metric: took 1.5184932s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 19:02:23.369072    5476 api_server.go:52] waiting for apiserver process to appear ...
	I1011 19:02:23.381249    5476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 19:02:23.423242    5476 api_server.go:72] duration metric: took 1.9662072s to wait for apiserver process to appear ...
	I1011 19:02:23.423242    5476 api_server.go:88] waiting for apiserver healthz status ...
	I1011 19:02:23.423242    5476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52534/healthz ...
	I1011 19:02:23.438236    5476 api_server.go:279] https://127.0.0.1:52534/healthz returned 200:
	ok
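The healthz probe above polls the apiserver's `/healthz` endpoint through the forwarded localhost port until it returns HTTP 200 with the body `ok`. A sketch of an equivalent wait loop, with the fetch command injectable so the snippet can be exercised without a live cluster (the port 52534 is the one from this run and differs per cluster; the function and stub names are hypothetical):

```shell
#!/bin/sh
# wait_healthz URL TRIES [FETCH_CMD] -- poll until FETCH_CMD prints "ok".
# FETCH_CMD defaults to curl; -k is needed because the endpoint uses a
# self-signed certificate, as in the minikube check.
wait_healthz() {
  url=$1; tries=$2; fetch=${3:-"curl -ks --max-time 2"}
  i=0
  while [ "$i" -lt "$tries" ]; do
    body=$($fetch "$url" 2>/dev/null) || body=""
    [ "$body" = "ok" ] && { echo healthy; return 0; }
    i=$((i+1))
    sleep 1
  done
  echo unhealthy
  return 1
}

# Real use against a running cluster (port varies per run):
#   wait_healthz https://127.0.0.1:52534/healthz 30
```

Injecting the fetcher keeps the retry logic separate from the transport, so the same loop can be pointed at any forwarded port.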
	I1011 19:02:23.444880    5476 api_server.go:141] control plane version: v1.28.2
	I1011 19:02:23.445145    5476 api_server.go:131] duration metric: took 21.9033ms to wait for apiserver health ...
	I1011 19:02:23.445251    5476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 19:02:23.580252    5476 system_pods.go:59] 6 kube-system pods found
	I1011 19:02:23.580252    5476 system_pods.go:61] "coredns-5dd5756b68-g2h9s" [6626c9fe-763e-46b0-a66a-5bd39e157d8d] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "etcd-pause-375900" [4bf74444-af82-451c-b1d3-36e322aebe0b] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-apiserver-pause-375900" [4b6ca595-2579-4609-972b-3d352dbc9971] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-controller-manager-pause-375900" [b6f72b4d-30c2-4679-9634-612b1e81dc5d] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-proxy-6wv6x" [86829575-b97b-4960-a459-934aecb00dd5] Running
	I1011 19:02:23.580252    5476 system_pods.go:61] "kube-scheduler-pause-375900" [e9b9a50f-2ab2-414b-a8a4-51708cdfb4d4] Running
	I1011 19:02:23.580252    5476 system_pods.go:74] duration metric: took 134.9669ms to wait for pod list to return data ...
	I1011 19:02:23.580252    5476 default_sa.go:34] waiting for default service account to be created ...
	I1011 19:02:23.767255    5476 default_sa.go:45] found service account: "default"
	I1011 19:02:23.768271    5476 default_sa.go:55] duration metric: took 186.9941ms for default service account to be created ...
	I1011 19:02:23.768271    5476 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 19:02:23.974250    5476 system_pods.go:86] 6 kube-system pods found
	I1011 19:02:23.974250    5476 system_pods.go:89] "coredns-5dd5756b68-g2h9s" [6626c9fe-763e-46b0-a66a-5bd39e157d8d] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "etcd-pause-375900" [4bf74444-af82-451c-b1d3-36e322aebe0b] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-apiserver-pause-375900" [4b6ca595-2579-4609-972b-3d352dbc9971] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-controller-manager-pause-375900" [b6f72b4d-30c2-4679-9634-612b1e81dc5d] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-proxy-6wv6x" [86829575-b97b-4960-a459-934aecb00dd5] Running
	I1011 19:02:23.974250    5476 system_pods.go:89] "kube-scheduler-pause-375900" [e9b9a50f-2ab2-414b-a8a4-51708cdfb4d4] Running
	I1011 19:02:23.974250    5476 system_pods.go:126] duration metric: took 205.9779ms to wait for k8s-apps to be running ...
	I1011 19:02:23.974250    5476 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 19:02:23.987285    5476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 19:02:24.018267    5476 system_svc.go:56] duration metric: took 44.017ms WaitForService to wait for kubelet.
	I1011 19:02:24.018267    5476 kubeadm.go:581] duration metric: took 2.5612295s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1011 19:02:24.018267    5476 node_conditions.go:102] verifying NodePressure condition ...
	I1011 19:02:24.176265    5476 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1011 19:02:24.177257    5476 node_conditions.go:123] node cpu capacity is 16
	I1011 19:02:24.177257    5476 node_conditions.go:105] duration metric: took 158.9893ms to run NodePressure ...
	I1011 19:02:24.177257    5476 start.go:228] waiting for startup goroutines ...
	I1011 19:02:24.177257    5476 start.go:233] waiting for cluster config update ...
	I1011 19:02:24.177257    5476 start.go:242] writing updated cluster config ...
	I1011 19:02:24.194287    5476 ssh_runner.go:195] Run: rm -f paused
	I1011 19:02:24.357690    5476 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1011 19:02:24.361706    5476 out.go:177] * Done! kubectl is now configured to use "pause-375900" cluster and "default" namespace by default
	I1011 19:02:23.740247    1140 cli_runner.go:164] Run: docker container inspect old-k8s-version-796400 --format={{.State.Status}}
	I1011 19:02:23.930252    1140 machine.go:88] provisioning docker machine ...
	I1011 19:02:23.930252    1140 ubuntu.go:169] provisioning hostname "old-k8s-version-796400"
	I1011 19:02:23.938270    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:24.157247    1140 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:24.167269    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52899 <nil> <nil>}
	I1011 19:02:24.167269    1140 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-796400 && echo "old-k8s-version-796400" | sudo tee /etc/hostname
	I1011 19:02:24.403693    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-796400
	
	I1011 19:02:24.412705    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:24.631461    1140 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:24.631461    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52899 <nil> <nil>}
	I1011 19:02:24.632457    1140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-796400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-796400/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-796400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 19:02:24.852463    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 19:02:24.852463    1140 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I1011 19:02:24.853487    1140 ubuntu.go:177] setting up certificates
	I1011 19:02:24.853487    1140 provision.go:83] configureAuth start
	I1011 19:02:24.862458    1140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-796400
	I1011 19:02:25.089469    1140 provision.go:138] copyHostCerts
	I1011 19:02:25.090462    1140 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I1011 19:02:25.090462    1140 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I1011 19:02:25.090462    1140 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1011 19:02:25.092454    1140 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I1011 19:02:25.092454    1140 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I1011 19:02:25.092454    1140 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1011 19:02:25.094469    1140 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I1011 19:02:25.094469    1140 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I1011 19:02:25.094469    1140 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I1011 19:02:25.096467    1140 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-796400 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-796400]
	I1011 19:02:25.320462    1140 provision.go:172] copyRemoteCerts
	I1011 19:02:25.339467    1140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 19:02:25.350452    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:25.583053    1140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52899 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa Username:docker}
	I1011 19:02:25.724696    1140 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1011 19:02:25.805938    1140 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1011 19:02:25.867938    1140 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 19:02:21.306019    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.9
	W1011 19:02:21.335032    9448 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.28.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1011 19:02:21.352027    9448 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1011 19:02:21.352027    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.9-0 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0
	I1011 19:02:21.352027    9448 docker.go:318] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1011 19:02:21.356033    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:02:21.364027    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.9-0
	I1011 19:02:21.411020    9448 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.2" does not exist at hash "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce" in container runtime
	I1011 19:02:21.411020    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2
	I1011 19:02:21.411020    9448 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1011 19:02:21.411020    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.10.1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1
	I1011 19:02:21.411020    9448 docker.go:318] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:02:21.411020    9448 docker.go:318] Removing image: registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:02:21.422033    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.28.2
	I1011 19:02:21.425016    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1011 19:02:21.471041    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:02:21.557671    9448 docker.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1011 19:02:21.557671    9448 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1011 19:02:21.659335    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9
	I1011 19:02:21.671318    9448 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:02:21.676316    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0
	I1011 19:02:21.676316    9448 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.2" needs transfer: "registry.k8s.io/kube-proxy:v1.28.2" does not exist at hash "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0" in container runtime
	I1011 19:02:21.676316    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2
	I1011 19:02:21.676316    9448 docker.go:318] Removing image: registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:02:21.683329    9448 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9
	I1011 19:02:21.689360    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.28.2
	I1011 19:02:21.694317    9448 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1011 19:02:21.754323    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2
	I1011 19:02:21.755322    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1
	I1011 19:02:21.755322    9448 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.2" does not exist at hash "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57" in container runtime
	I1011 19:02:21.755322    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2
	I1011 19:02:21.755322    9448 docker.go:318] Removing image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:02:21.772320    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.28.2
	I1011 19:02:21.775329    9448 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1011 19:02:21.779346    9448 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1011 19:02:23.457374    9448 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.8995711s)
	I1011 19:02:23.457374    9448 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.2: (1.786048s)
	I1011 19:02:23.457458    9448 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1011 19:02:23.457458    9448 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.2" does not exist at hash "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8" in container runtime
	I1011 19:02:23.457564    9448 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.28.2 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2
	I1011 19:02:23.457564    9448 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: (1.7742266s)
	I1011 19:02:23.457564    9448 docker.go:318] Removing image: registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:02:23.457564    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I1011 19:02:23.457674    9448 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.7633488s)
	I1011 19:02:23.457823    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.9-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.9-0': No such file or directory
	I1011 19:02:23.457823    9448 ssh_runner.go:235] Completed: docker rmi registry.k8s.io/kube-proxy:v1.28.2: (1.7684551s)
	I1011 19:02:23.457823    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I1011 19:02:23.457823    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2
	I1011 19:02:23.457823    9448 ssh_runner.go:235] Completed: docker rmi registry.k8s.io/kube-controller-manager:v1.28.2: (1.6854957s)
	I1011 19:02:23.457823    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0 --> /var/lib/minikube/images/etcd_3.5.9-0 (102902784 bytes)
	I1011 19:02:23.457823    9448 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2: (1.6824867s)
	I1011 19:02:23.457823    9448 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (1.6784689s)
	I1011 19:02:23.457823    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.28.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.28.2': No such file or directory
	I1011 19:02:23.457823    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2
	I1011 19:02:23.457823    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.10.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.10.1': No such file or directory
	I1011 19:02:23.457823    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.2 --> /var/lib/minikube/images/kube-apiserver_v1.28.2 (34671104 bytes)
	I1011 19:02:23.457823    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1 --> /var/lib/minikube/images/coredns_v1.10.1 (16193024 bytes)
	I1011 19:02:23.472767    9448 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.28.2
	I1011 19:02:23.481764    9448 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1011 19:02:23.482747    9448 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.2
	I1011 19:02:23.624270    9448 docker.go:285] Loading image: /var/lib/minikube/images/pause_3.9
	I1011 19:02:23.624270    9448 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.9 | docker load"
	I1011 19:02:23.771253    9448 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2
	I1011 19:02:23.771253    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.28.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.28.2': No such file or directory
	I1011 19:02:23.771253    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.28.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.28.2': No such file or directory
	I1011 19:02:23.771253    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.2 --> /var/lib/minikube/images/kube-proxy_v1.28.2 (24561152 bytes)
	I1011 19:02:23.772258    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.2 --> /var/lib/minikube/images/kube-controller-manager_v1.28.2 (33403392 bytes)
	I1011 19:02:23.786264    9448 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1011 19:02:24.166262    9448 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9 from cache
	I1011 19:02:24.336689    9448 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.28.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.28.2': No such file or directory
	I1011 19:02:24.336689    9448 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.2 --> /var/lib/minikube/images/kube-scheduler_v1.28.2 (18819072 bytes)
	I1011 19:02:25.933930    1140 provision.go:86] duration metric: configureAuth took 1.0804373s
	I1011 19:02:25.933930    1140 ubuntu.go:193] setting minikube options for container-runtime
	I1011 19:02:25.934944    1140 config.go:182] Loaded profile config "old-k8s-version-796400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1011 19:02:25.944945    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:26.158950    1140 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:26.159944    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52899 <nil> <nil>}
	I1011 19:02:26.159944    1140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1011 19:02:26.362970    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1011 19:02:26.362970    1140 ubuntu.go:71] root file system type: overlay
	I1011 19:02:26.362970    1140 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1011 19:02:26.372933    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:26.567943    1140 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:26.568942    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52899 <nil> <nil>}
	I1011 19:02:26.568942    1140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1011 19:02:26.825235    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1011 19:02:26.833239    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:27.049792    1140 main.go:141] libmachine: Using SSH client type: native
	I1011 19:02:27.051756    1140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa543a0] 0xa56ee0 <nil>  [] 0s} 127.0.0.1 52899 <nil> <nil>}
	I1011 19:02:27.051756    1140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1011 19:02:26.609936    9448 docker.go:285] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1011 19:02:26.609936    9448 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.10.1 | docker load"
	I1011 19:02:30.674129    9448 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.10.1 | docker load": (4.0641737s)
	I1011 19:02:30.674129    9448 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1 from cache
	I1011 19:02:30.674129    9448 docker.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1011 19:02:30.674129    9448 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.28.2 | docker load"
	I1011 19:02:31.572768    1140 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-11 19:02:26.809686000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1011 19:02:31.572768    1140 machine.go:91] provisioned docker machine in 7.642481s
	I1011 19:02:31.572768    1140 client.go:171] LocalClient.Create took 49.1894626s
	I1011 19:02:31.572768    1140 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-796400" took 49.1894626s
	I1011 19:02:31.572768    1140 start.go:300] post-start starting for "old-k8s-version-796400" (driver="docker")
	I1011 19:02:31.572768    1140 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 19:02:31.587807    1140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 19:02:31.594766    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:31.822730    1140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52899 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa Username:docker}
	I1011 19:02:31.992721    1140 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 19:02:32.003743    1140 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 19:02:32.004734    1140 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 19:02:32.004734    1140 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 19:02:32.004734    1140 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1011 19:02:32.004734    1140 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I1011 19:02:32.004734    1140 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I1011 19:02:32.005755    1140 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem -> 15562.pem in /etc/ssl/certs
	I1011 19:02:32.022720    1140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 19:02:32.053736    1140 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\15562.pem --> /etc/ssl/certs/15562.pem (1708 bytes)
	I1011 19:02:32.118762    1140 start.go:303] post-start completed in 545.9909ms
	I1011 19:02:32.131745    1140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-796400
	I1011 19:02:32.357778    1140 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-796400\config.json ...
	I1011 19:02:32.383361    1140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 19:02:32.395345    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:32.642351    1140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52899 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa Username:docker}
	I1011 19:02:32.808355    1140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 19:02:32.821345    1140 start.go:128] duration metric: createHost completed in 50.4470336s
	I1011 19:02:32.821345    1140 start.go:83] releasing machines lock for "old-k8s-version-796400", held for 50.4470336s
	I1011 19:02:32.829365    1140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-796400
	I1011 19:02:33.092165    1140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 19:02:33.104156    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:33.111152    1140 ssh_runner.go:195] Run: cat /version.json
	I1011 19:02:33.126145    1140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-796400
	I1011 19:02:33.374565    1140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52899 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa Username:docker}
	I1011 19:02:33.398581    1140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52899 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\old-k8s-version-796400\id_rsa Username:docker}
	I1011 19:02:33.527973    1140 ssh_runner.go:195] Run: systemctl --version
	I1011 19:02:33.786887    1140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 19:02:33.823886    1140 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W1011 19:02:33.847873    1140 start.go:416] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I1011 19:02:33.864877    1140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1011 19:02:33.922882    1140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1011 19:02:33.960919    1140 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1011 19:02:33.961887    1140 start.go:472] detecting cgroup driver to use...
	I1011 19:02:33.961887    1140 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1011 19:02:33.961887    1140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:02:34.015895    1140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1011 19:02:34.048900    1140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 19:02:34.080044    1140 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1011 19:02:34.093067    1140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1011 19:02:34.135581    1140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:02:34.181480    1140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 19:02:34.215500    1140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 19:02:34.259120    1140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 19:02:34.311154    1140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 19:02:34.355593    1140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 19:02:34.398582    1140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 19:02:34.433586    1140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:34.636420    1140 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1011 19:02:34.834850    1140 start.go:472] detecting cgroup driver to use...
	I1011 19:02:34.835842    1140 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1011 19:02:34.847843    1140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1011 19:02:34.886850    1140 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1011 19:02:34.899862    1140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 19:02:34.927917    1140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 19:02:35.011858    1140 ssh_runner.go:195] Run: which cri-dockerd
	I1011 19:02:35.031838    1140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1011 19:02:35.057871    1140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1011 19:02:35.127839    1140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1011 19:02:35.369509    1140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1011 19:02:35.613497    1140 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1011 19:02:35.614488    1140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1011 19:02:35.674498    1140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 19:02:35.857481    1140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1011 19:02:36.636780    1140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1011 19:02:36.706825    1140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	
	* 
	* ==> Docker <==
	* Oct 11 19:01:28 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7916b8c1fe0aa6349aa0a4e51327a2a7e7cc9d3a9c5c5de1d970b178194b6639/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:29 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ffd4e4805972d18c06ba5637dbb2cb043af8cf8c9f9541426e785f8506a8061f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.853844800Z" level=info msg="ignoring event" container=7916b8c1fe0aa6349aa0a4e51327a2a7e7cc9d3a9c5c5de1d970b178194b6639 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.853951200Z" level=info msg="ignoring event" container=46a29adb775e99ffbf85df2d1c7e1564cf011606f7d8f3055af3f7bd2f1327b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.854369600Z" level=info msg="ignoring event" container=2940af478bf4f0d96c255cb080794af6cfa35118bc2fd460bd91eb44bbabf19d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.855162500Z" level=info msg="ignoring event" container=19c537ee4c384da75f24c7695517673aaa6bbe7ef82f1c4791bf1338dc6c124f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.855292500Z" level=info msg="ignoring event" container=8cbb52bc46249598b5d0846ba76d5b82ac189d6a8362c3aaa09c50f640c678bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.855349900Z" level=info msg="ignoring event" container=b6ad1cd537881c4275d5e1aeb80f0b61c2c1f5a1c34765b2dffc8ab4e5465ecc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.855460900Z" level=info msg="ignoring event" container=e3fc1b46e1feedc3f1e31488df9ea2030aa650e2ca70dadd726e3bc614213b11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.855518000Z" level=info msg="ignoring event" container=0f568baa0e8688720f159eca7cc486067efb673e31ab3aa267f2581a3f3842ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:50 pause-375900 dockerd[4423]: time="2023-10-11T19:01:49.863396100Z" level=info msg="ignoring event" container=ffd4e4805972d18c06ba5637dbb2cb043af8cf8c9f9541426e785f8506a8061f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:54 pause-375900 dockerd[4423]: time="2023-10-11T19:01:54.592521900Z" level=info msg="ignoring event" container=3a68e3e25c04267c77aff941e6b65fa079c9bfbcb4e408574e9594e032f6b4a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:55 pause-375900 dockerd[4423]: time="2023-10-11T19:01:55.097221000Z" level=info msg="ignoring event" container=6ae2fd93692b8ac56e11dc9dbac636d2c64214f49b410d0ef89f3e98a90c7a32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:56 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2d5e3814ef9fe0069183c8d86b1a394a216aebda74eae5572439ee31ed7fd05f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:57 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e37524766f7f807fe9be8a2ec7961cd222babe24ff720a64ad931d73e84f6a6/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:57 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe93555e1943cfc4d8097c71b544f8cbbcc9a69b55af4e8e4a58b8155f6818a8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:57 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:57Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-g2h9s_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ffd4e4805972d18c06ba5637dbb2cb043af8cf8c9f9541426e785f8506a8061f\""
	Oct 11 19:01:57 pause-375900 dockerd[4423]: time="2023-10-11T19:01:57.586088700Z" level=info msg="ignoring event" container=8dd1f463809addd6a8a63a2a72be57a1fdca6b45a38620652840ce6ef61759dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/303f8995cf3b6418ab9bbf8ac498ab20563a3ccfdfb4dcc62faa4223b24db15f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: W1011 19:01:58.154485    4721 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1581ccc65d1ea76b43ccc98d5cf98f80319874055dffb204e146483f8d0b8000/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:01:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/34b9fcffa3e832cab32b574b136fe77e072b9919a4f6eb2c12e006f057fce350/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: W1011 19:01:58.383541    4721 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Oct 11 19:01:58 pause-375900 cri-dockerd[4721]: W1011 19:01:58.460598    4721 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Oct 11 19:02:11 pause-375900 cri-dockerd[4721]: time="2023-10-11T19:02:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b93772cb7b694       ead0a4a53df89       28 seconds ago       Running             coredns                   2                   1581ccc65d1ea       coredns-5dd5756b68-g2h9s
	19db40dfaf81c       c120fed2beb84       28 seconds ago       Running             kube-proxy                2                   2d5e3814ef9fe       kube-proxy-6wv6x
	3aa4b3b7ee948       55f13c92defb1       37 seconds ago       Running             kube-controller-manager   2                   6e37524766f7f       kube-controller-manager-pause-375900
	87b2c2f958e07       cdcab12b2dd16       37 seconds ago       Running             kube-apiserver            2                   34b9fcffa3e83       kube-apiserver-pause-375900
	846b58faec370       73deb9a3f7025       37 seconds ago       Running             etcd                      2                   303f8995cf3b6       etcd-pause-375900
	29048537c048b       7a5d9d67a13f6       37 seconds ago       Running             kube-scheduler            2                   fe93555e1943c       kube-scheduler-pause-375900
	3a68e3e25c042       ead0a4a53df89       About a minute ago   Exited              coredns                   1                   ffd4e4805972d       coredns-5dd5756b68-g2h9s
	46a29adb775e9       55f13c92defb1       About a minute ago   Exited              kube-controller-manager   1                   0f568baa0e868       kube-controller-manager-pause-375900
	8cbb52bc46249       7a5d9d67a13f6       About a minute ago   Exited              kube-scheduler            1                   7916b8c1fe0aa       kube-scheduler-pause-375900
	6ae2fd93692b8       73deb9a3f7025       About a minute ago   Exited              etcd                      1                   b6ad1cd537881       etcd-pause-375900
	e3fc1b46e1fee       c120fed2beb84       About a minute ago   Exited              kube-proxy                1                   19c537ee4c384       kube-proxy-6wv6x
	8dd1f463809ad       cdcab12b2dd16       About a minute ago   Exited              kube-apiserver            1                   2940af478bf4f       kube-apiserver-pause-375900
	
	* 
	* ==> coredns [3a68e3e25c04] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54493 - 37301 "HINFO IN 8378913592617583348.8010908138851535645. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.055823s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [b93772cb7b69] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37703 - 54270 "HINFO IN 5490492823263349566.3533110061362470414. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0806851s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-375900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-375900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91587593de480e6b788546c040ff38fdb52a5106
	                    minikube.k8s.io/name=pause-375900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_11T19_00_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Oct 2023 19:00:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-375900
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Oct 2023 19:02:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Oct 2023 19:02:11 +0000   Wed, 11 Oct 2023 19:00:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Oct 2023 19:02:11 +0000   Wed, 11 Oct 2023 19:00:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Oct 2023 19:02:11 +0000   Wed, 11 Oct 2023 19:00:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Oct 2023 19:02:11 +0000   Wed, 11 Oct 2023 19:00:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-375900
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 b51ce203fd724b97a2c9f7c2c29a9e54
	  System UUID:                b51ce203fd724b97a2c9f7c2c29a9e54
	  Boot ID:                    210bf8b0-efd3-412e-9dae-f952437eab55
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-g2h9s                100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     119s
	  kube-system                 etcd-pause-375900                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         2m14s
	  kube-system                 kube-apiserver-pause-375900             250m (1%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-controller-manager-pause-375900    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-proxy-6wv6x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-pause-375900             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 115s                   kube-proxy       
	  Normal  Starting                 25s                    kube-proxy       
	  Normal  Starting                 60s                    kube-proxy       
	  Normal  Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m30s (x7 over 2m36s)  kubelet          Node pause-375900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m25s (x8 over 2m36s)  kubelet          Node pause-375900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s (x8 over 2m36s)  kubelet          Node pause-375900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s                  kubelet          Node pause-375900 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m11s                  kubelet          Node pause-375900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m11s                  kubelet          Node pause-375900 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m                     node-controller  Node pause-375900 event: Registered Node pause-375900 in Controller
	  Normal  Starting                 38s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  38s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  37s (x8 over 38s)      kubelet          Node pause-375900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 38s)      kubelet          Node pause-375900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 38s)      kubelet          Node pause-375900 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                    node-controller  Node pause-375900 event: Registered Node pause-375900 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct11 18:31] WSL2: Performing memory compaction.
	[Oct11 18:33] WSL2: Performing memory compaction.
	[Oct11 18:34] WSL2: Performing memory compaction.
	[Oct11 18:36] WSL2: Performing memory compaction.
	[Oct11 18:37] WSL2: Performing memory compaction.
	[Oct11 18:38] WSL2: Performing memory compaction.
	[Oct11 18:39] WSL2: Performing memory compaction.
	[Oct11 18:41] WSL2: Performing memory compaction.
	[Oct11 18:42] WSL2: Performing memory compaction.
	[Oct11 18:43] WSL2: Performing memory compaction.
	[Oct11 18:44] WSL2: Performing memory compaction.
	[Oct11 18:46] WSL2: Performing memory compaction.
	[Oct11 18:47] WSL2: Performing memory compaction.
	[Oct11 18:48] WSL2: Performing memory compaction.
	[Oct11 18:49] WSL2: Performing memory compaction.
	[Oct11 18:50] WSL2: Performing memory compaction.
	[Oct11 18:51] WSL2: Performing memory compaction.
	[Oct11 18:53] WSL2: Performing memory compaction.
	[Oct11 18:54] WSL2: Performing memory compaction.
	[  +8.722576] process 'docker/tmp/qemu-check600830759/check' started with executable stack
	[Oct11 18:55] WSL2: Performing memory compaction.
	[Oct11 18:56] WSL2: Performing memory compaction.
	[Oct11 18:58] WSL2: Performing memory compaction.
	[Oct11 19:00] WSL2: Performing memory compaction.
	[Oct11 19:01] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [6ae2fd93692b] <==
	* {"level":"info","ts":"2023-10-11T19:01:49.236971Z","caller":"traceutil/trace.go:171","msg":"trace[563901894] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:420; }","duration":"125.072ms","start":"2023-10-11T19:01:49.111876Z","end":"2023-10-11T19:01:49.236948Z","steps":["trace[563901894] 'agreement among raft nodes before linearized reading'  (duration: 122.8334ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:01:49.236988Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:01:48.330904Z","time spent":"906.0642ms","remote":"127.0.0.1:55918","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-10-11T19:01:49.237184Z","caller":"traceutil/trace.go:171","msg":"trace[427941167] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-375900; range_end:; response_count:1; response_revision:420; }","duration":"2.6343543s","start":"2023-10-11T19:01:46.602784Z","end":"2023-10-11T19:01:49.237138Z","steps":["trace[427941167] 'agreement among raft nodes before linearized reading'  (duration: 2.6319072s)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:01:49.237346Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:01:46.602768Z","time spent":"2.6345622s","remote":"127.0.0.1:55930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5215,"request content":"key:\"/registry/pods/kube-system/etcd-pause-375900\" "}
	{"level":"info","ts":"2023-10-11T19:01:49.237434Z","caller":"traceutil/trace.go:171","msg":"trace[1462386019] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:420; }","duration":"5.5658794s","start":"2023-10-11T19:01:43.67153Z","end":"2023-10-11T19:01:49.237409Z","steps":["trace[1462386019] 'agreement among raft nodes before linearized reading'  (duration: 5.5632348s)"],"step_count":1}
	{"level":"info","ts":"2023-10-11T19:01:49.34445Z","caller":"traceutil/trace.go:171","msg":"trace[696375484] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"102.6359ms","start":"2023-10-11T19:01:49.241787Z","end":"2023-10-11T19:01:49.344423Z","steps":["trace[696375484] 'process raft request'  (duration: 91.8263ms)","trace[696375484] 'compare'  (duration: 10.4733ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-11T19:01:49.452367Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-11T19:01:49.452588Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-375900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"info","ts":"2023-10-11T19:01:49.452785Z","caller":"traceutil/trace.go:171","msg":"trace[29268100] linearizableReadLoop","detail":"{readStateIndex:451; appliedIndex:448; }","duration":"104.3291ms","start":"2023-10-11T19:01:49.34841Z","end":"2023-10-11T19:01:49.452739Z","steps":["trace[29268100] 'read index received'  (duration: 103.9734ms)","trace[29268100] 'applied index is now lower than readState.Index'  (duration: 352.5µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-11T19:01:49.452943Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-11T19:01:49.453131Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-11T19:01:49.453185Z","caller":"traceutil/trace.go:171","msg":"trace[176465660] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"205.3632ms","start":"2023-10-11T19:01:49.247806Z","end":"2023-10-11T19:01:49.453169Z","steps":["trace[176465660] 'process raft request'  (duration: 204.6517ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:01:49.453323Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-11T19:01:49.453485Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-11T19:01:49.453535Z","caller":"v3rpc/watch.go:473","msg":"failed to send watch response to gRPC stream","error":"rpc error: code = Unavailable desc = transport is closing"}
	{"level":"info","ts":"2023-10-11T19:01:49.453611Z","caller":"traceutil/trace.go:171","msg":"trace[1172837418] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"191.3387ms","start":"2023-10-11T19:01:49.262258Z","end":"2023-10-11T19:01:49.453596Z","steps":["trace[1172837418] 'process raft request'  (duration: 190.3479ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-11T19:01:49.453675Z","caller":"traceutil/trace.go:171","msg":"trace[1618179482] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"176.1741ms","start":"2023-10-11T19:01:49.27749Z","end":"2023-10-11T19:01:49.453665Z","steps":["trace[1618179482] 'process raft request'  (duration: 175.1833ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:01:49.453802Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.3892ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:basic-user\" ","response":"range_response_count:1 size:678"}
	{"level":"info","ts":"2023-10-11T19:01:49.453912Z","caller":"traceutil/trace.go:171","msg":"trace[1847343109] range","detail":"{range_begin:/registry/clusterroles/system:basic-user; range_end:; response_count:1; response_revision:424; }","duration":"105.5135ms","start":"2023-10-11T19:01:49.348384Z","end":"2023-10-11T19:01:49.453897Z","steps":["trace[1847343109] 'agreement among raft nodes before linearized reading'  (duration: 105.3295ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:01:49.454027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.7519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-apiserver-pause-375900.178d22c66f462f9c\" ","response":"range_response_count:1 size:851"}
	{"level":"info","ts":"2023-10-11T19:01:49.454065Z","caller":"traceutil/trace.go:171","msg":"trace[1063206093] range","detail":"{range_begin:/registry/events/kube-system/kube-apiserver-pause-375900.178d22c66f462f9c; range_end:; response_count:1; response_revision:424; }","duration":"103.7914ms","start":"2023-10-11T19:01:49.35026Z","end":"2023-10-11T19:01:49.454052Z","steps":["trace[1063206093] 'agreement among raft nodes before linearized reading'  (duration: 103.7106ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-11T19:01:49.560555Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2023-10-11T19:01:55.068638Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-10-11T19:01:55.069918Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-10-11T19:01:55.070061Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-375900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	* 
	* ==> etcd [846b58faec37] <==
	* {"level":"info","ts":"2023-10-11T19:02:12.117067Z","caller":"traceutil/trace.go:171","msg":"trace[2134008590] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"645.8142ms","start":"2023-10-11T19:02:11.471224Z","end":"2023-10-11T19:02:12.117039Z","steps":["trace[2134008590] 'process raft request'  (duration: 531.3311ms)","trace[2134008590] 'compare'  (duration: 112.7161ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-11T19:02:12.117279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"632.0826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:1 size:3021"}
	{"level":"warn","ts":"2023-10-11T19:02:12.117323Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:11.471136Z","time spent":"646.0808ms","remote":"127.0.0.1:57002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4388,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-375900\" mod_revision:420 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-375900\" value_size:4326 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-375900\" > >"}
	{"level":"info","ts":"2023-10-11T19:02:12.117357Z","caller":"traceutil/trace.go:171","msg":"trace[1356316456] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:428; }","duration":"632.1738ms","start":"2023-10-11T19:02:11.48516Z","end":"2023-10-11T19:02:12.117334Z","steps":["trace[1356316456] 'agreement among raft nodes before linearized reading'  (duration: 631.9492ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-11T19:02:12.117378Z","caller":"traceutil/trace.go:171","msg":"trace[345143402] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"623.2183ms","start":"2023-10-11T19:02:11.494139Z","end":"2023-10-11T19:02:12.117358Z","steps":["trace[345143402] 'process raft request'  (duration: 622.3908ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:02:12.11741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:11.485149Z","time spent":"632.2456ms","remote":"127.0.0.1:56982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":3044,"request content":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" "}
	{"level":"warn","ts":"2023-10-11T19:02:12.117498Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:11.494122Z","time spent":"623.2937ms","remote":"127.0.0.1:57000","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4375,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/pause-375900\" mod_revision:368 > success:<request_put:<key:\"/registry/minions/pause-375900\" value_size:4337 >> failure:<request_range:<key:\"/registry/minions/pause-375900\" > >"}
	{"level":"info","ts":"2023-10-11T19:02:13.35551Z","caller":"traceutil/trace.go:171","msg":"trace[1838844536] linearizableReadLoop","detail":"{readStateIndex:479; appliedIndex:478; }","duration":"100.2773ms","start":"2023-10-11T19:02:13.255205Z","end":"2023-10-11T19:02:13.355482Z","steps":["trace[1838844536] 'read index received'  (duration: 19.0497ms)","trace[1838844536] 'applied index is now lower than readState.Index'  (duration: 81.2242ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-11T19:02:13.355633Z","caller":"traceutil/trace.go:171","msg":"trace[914169653] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"176.1766ms","start":"2023-10-11T19:02:13.17936Z","end":"2023-10-11T19:02:13.355542Z","steps":["trace[914169653] 'process raft request'  (duration: 94.949ms)","trace[914169653] 'compare'  (duration: 80.9741ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-11T19:02:13.355738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.5374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:expand-controller\" ","response":"range_response_count:1 size:880"}
	{"level":"info","ts":"2023-10-11T19:02:13.355799Z","caller":"traceutil/trace.go:171","msg":"trace[55879089] range","detail":"{range_begin:/registry/clusterroles/system:controller:expand-controller; range_end:; response_count:1; response_revision:448; }","duration":"100.6004ms","start":"2023-10-11T19:02:13.255169Z","end":"2023-10-11T19:02:13.355769Z","steps":["trace[55879089] 'agreement among raft nodes before linearized reading'  (duration: 100.4497ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:02:15.90109Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9722580140622618014,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2023-10-11T19:02:16.401603Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9722580140622618014,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2023-10-11T19:02:16.736524Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.2920234s","expected-duration":"1s"}
	{"level":"info","ts":"2023-10-11T19:02:16.741605Z","caller":"traceutil/trace.go:171","msg":"trace[1014811420] linearizableReadLoop","detail":"{readStateIndex:530; appliedIndex:529; }","duration":"1.3416101s","start":"2023-10-11T19:02:15.399654Z","end":"2023-10-11T19:02:16.741264Z","steps":["trace[1014811420] 'read index received'  (duration: 1.339411s)","trace[1014811420] 'applied index is now lower than readState.Index'  (duration: 2.1964ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-11T19:02:16.741603Z","caller":"traceutil/trace.go:171","msg":"trace[317881645] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"1.3419286s","start":"2023-10-11T19:02:15.399586Z","end":"2023-10-11T19:02:16.741515Z","steps":["trace[317881645] 'process raft request'  (duration: 1.3399413s)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:02:16.742374Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.3424521s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-375900\" ","response":"range_response_count:1 size:5306"}
	{"level":"info","ts":"2023-10-11T19:02:16.74253Z","caller":"traceutil/trace.go:171","msg":"trace[827837724] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-375900; range_end:; response_count:1; response_revision:487; }","duration":"1.3428793s","start":"2023-10-11T19:02:15.399633Z","end":"2023-10-11T19:02:16.742512Z","steps":["trace[827837724] 'agreement among raft nodes before linearized reading'  (duration: 1.3420964s)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:02:16.74263Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:15.39962Z","time spent":"1.3429781s","remote":"127.0.0.1:57002","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5329,"request content":"key:\"/registry/pods/kube-system/etcd-pause-375900\" "}
	{"level":"warn","ts":"2023-10-11T19:02:16.74245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"360.3249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-11T19:02:16.742853Z","caller":"traceutil/trace.go:171","msg":"trace[1832068937] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:487; }","duration":"360.6066ms","start":"2023-10-11T19:02:16.382091Z","end":"2023-10-11T19:02:16.742715Z","steps":["trace[1832068937] 'agreement among raft nodes before linearized reading'  (duration: 360.255ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-11T19:02:16.742865Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:15.399553Z","time spent":"1.3425813s","remote":"127.0.0.1:57002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7340,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-375900\" mod_revision:485 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-375900\" value_size:7278 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-375900\" > >"}
	{"level":"warn","ts":"2023-10-11T19:02:16.742913Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-11T19:02:16.382071Z","time spent":"360.8251ms","remote":"127.0.0.1:57026","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-10-11T19:02:30.569871Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.1671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-11T19:02:30.570063Z","caller":"traceutil/trace.go:171","msg":"trace[802662111] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:499; }","duration":"181.3732ms","start":"2023-10-11T19:02:30.388666Z","end":"2023-10-11T19:02:30.570039Z","steps":["trace[802662111] 'range keys from in-memory index tree'  (duration: 181.0413ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:02:39 up  1:16,  0 users,  load average: 8.45, 8.45, 5.43
	Linux pause-375900 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [87b2c2f958e0] <==
	* Trace[128862297]: ["GuaranteedUpdate etcd3" audit-id:808c9a53-bb68-428b-ab21-5dff646d982a,key:/minions/pause-375900,type:*core.Node,resource:nodes 637ms (19:02:11.482)
	Trace[128862297]:  ---"Txn call completed" 625ms (19:02:12.118)]
	Trace[128862297]: ---"Object stored in database" 626ms (19:02:12.118)
	Trace[128862297]: [638.0845ms] [638.0845ms] END
	I1011 19:02:12.120182       1 trace.go:236] Trace[1833827336]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2a8a8ca5-a359-4b8c-9805-787bddfac120,client:192.168.85.2,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-375900/status,user-agent:kubelet/v1.28.2 (linux/amd64) kubernetes/89a4ea3,verb:PATCH (11-Oct-2023 19:02:11.451) (total time: 668ms):
	Trace[1833827336]: ["GuaranteedUpdate etcd3" audit-id:2a8a8ca5-a359-4b8c-9805-787bddfac120,key:/pods/kube-system/kube-scheduler-pause-375900,type:*core.Pod,resource:pods 667ms (19:02:11.452)
	Trace[1833827336]:  ---"Txn call completed" 648ms (19:02:12.118)]
	Trace[1833827336]: ---"About to check admission control" 17ms (19:02:11.469)
	Trace[1833827336]: ---"Object stored in database" 649ms (19:02:12.118)
	Trace[1833827336]: [668.1182ms] [668.1182ms] END
	I1011 19:02:15.053185       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1011 19:02:15.080338       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1011 19:02:15.208169       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1011 19:02:15.285029       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1011 19:02:15.312107       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1011 19:02:16.748086       1 trace.go:236] Trace[1550518484]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:54105783-65c2-42ef-944d-0ecc2f252337,client:192.168.85.2,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-375900/status,user-agent:kubelet/v1.28.2 (linux/amd64) kubernetes/89a4ea3,verb:PATCH (11-Oct-2023 19:02:15.392) (total time: 1355ms):
	Trace[1550518484]: ["GuaranteedUpdate etcd3" audit-id:54105783-65c2-42ef-944d-0ecc2f252337,key:/pods/kube-system/kube-apiserver-pause-375900,type:*core.Pod,resource:pods 1355ms (19:02:15.392)
	Trace[1550518484]:  ---"Txn call completed" 1346ms (19:02:16.745)]
	Trace[1550518484]: ---"Object stored in database" 1348ms (19:02:16.747)
	Trace[1550518484]: [1.3552742s] [1.3552742s] END
	I1011 19:02:16.748405       1 trace.go:236] Trace[1051856289]: "Get" accept:application/json, */*,audit-id:b794ef53-4bb7-48fd-af0e-5e1628609e9e,client:192.168.85.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-pause-375900,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:GET (11-Oct-2023 19:02:15.398) (total time: 1349ms):
	Trace[1051856289]: ---"About to write a response" 1348ms (19:02:16.746)
	Trace[1051856289]: [1.3495733s] [1.3495733s] END
	I1011 19:02:25.762350       1 controller.go:624] quota admission added evaluator for: endpoints
	I1011 19:02:25.770692       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [8dd1f463809a] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 19:01:55.176922       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 19:01:55.198664       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1011 19:01:55.359395       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [3aa4b3b7ee94] <==
	* I1011 19:02:25.663992       1 shared_informer.go:318] Caches are synced for node
	I1011 19:02:25.664032       1 shared_informer.go:318] Caches are synced for namespace
	I1011 19:02:25.668044       1 shared_informer.go:318] Caches are synced for daemon sets
	I1011 19:02:25.668229       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1011 19:02:25.668314       1 taint_manager.go:211] "Sending events to api server"
	I1011 19:02:25.668023       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1011 19:02:25.668483       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-375900"
	I1011 19:02:25.668669       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1011 19:02:25.668112       1 event.go:307] "Event occurred" object="pause-375900" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-375900 event: Registered Node pause-375900 in Controller"
	I1011 19:02:25.668915       1 range_allocator.go:174] "Sending events to api server"
	I1011 19:02:25.668962       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1011 19:02:25.668975       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1011 19:02:25.668989       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1011 19:02:25.752008       1 shared_informer.go:318] Caches are synced for attach detach
	I1011 19:02:25.752703       1 shared_informer.go:318] Caches are synced for resource quota
	I1011 19:02:25.756519       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1011 19:02:25.781060       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1011 19:02:25.858230       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1011 19:02:25.858366       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1011 19:02:25.858390       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1011 19:02:25.858413       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1011 19:02:25.858608       1 shared_informer.go:318] Caches are synced for resource quota
	I1011 19:02:26.157975       1 shared_informer.go:318] Caches are synced for garbage collector
	I1011 19:02:26.166826       1 shared_informer.go:318] Caches are synced for garbage collector
	I1011 19:02:26.166976       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [46a29adb775e] <==
	* I1011 19:01:33.313301       1 serving.go:348] Generated self-signed cert in-memory
	I1011 19:01:33.754258       1 controllermanager.go:189] "Starting" version="v1.28.2"
	I1011 19:01:33.754409       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 19:01:33.757529       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1011 19:01:33.757580       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1011 19:01:33.758134       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I1011 19:01:33.758376       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1011 19:01:49.241727       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	* 
	* ==> kube-proxy [19db40dfaf81] <==
	* I1011 19:02:13.573061       1 server_others.go:69] "Using iptables proxy"
	I1011 19:02:13.699987       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1011 19:02:13.793674       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1011 19:02:13.851304       1 server_others.go:152] "Using iptables Proxier"
	I1011 19:02:13.851484       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1011 19:02:13.851502       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1011 19:02:13.851550       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1011 19:02:13.852374       1 server.go:846] "Version info" version="v1.28.2"
	I1011 19:02:13.852491       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 19:02:13.854311       1 config.go:188] "Starting service config controller"
	I1011 19:02:13.854330       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1011 19:02:13.854433       1 config.go:315] "Starting node config controller"
	I1011 19:02:13.854467       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1011 19:02:13.854509       1 config.go:97] "Starting endpoint slice config controller"
	I1011 19:02:13.854526       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1011 19:02:13.955113       1 shared_informer.go:318] Caches are synced for node config
	I1011 19:02:13.955583       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1011 19:02:13.955690       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [e3fc1b46e1fe] <==
	* I1011 19:01:29.069795       1 server_others.go:69] "Using iptables proxy"
	E1011 19:01:29.152445       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-375900": dial tcp 192.168.85.2:8443: connect: connection refused
	I1011 19:01:39.130708       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1011 19:01:39.198253       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1011 19:01:39.254944       1 server_others.go:152] "Using iptables Proxier"
	I1011 19:01:39.255176       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1011 19:01:39.255191       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1011 19:01:39.255240       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1011 19:01:39.256879       1 server.go:846] "Version info" version="v1.28.2"
	I1011 19:01:39.256979       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 19:01:39.259709       1 config.go:188] "Starting service config controller"
	I1011 19:01:39.259839       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1011 19:01:39.261956       1 config.go:97] "Starting endpoint slice config controller"
	I1011 19:01:39.262112       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1011 19:01:39.262625       1 config.go:315] "Starting node config controller"
	I1011 19:01:39.262781       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1011 19:01:39.362414       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1011 19:01:39.362575       1 shared_informer.go:318] Caches are synced for service config
	I1011 19:01:39.363088       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [29048537c048] <==
	* I1011 19:02:08.072947       1 serving.go:348] Generated self-signed cert in-memory
	I1011 19:02:11.480091       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1011 19:02:11.480165       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 19:02:12.122039       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1011 19:02:12.122084       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1011 19:02:12.122339       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1011 19:02:12.122365       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1011 19:02:12.122426       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1011 19:02:12.122448       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 19:02:12.123443       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1011 19:02:12.126363       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1011 19:02:12.222582       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1011 19:02:12.251602       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1011 19:02:12.251935       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [8cbb52bc4624] <==
	* I1011 19:01:32.290306       1 serving.go:348] Generated self-signed cert in-memory
	W1011 19:01:35.352673       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1011 19:01:35.352720       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1011 19:01:35.352742       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1011 19:01:35.352756       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1011 19:01:35.475728       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1011 19:01:35.475934       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 19:01:35.478730       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1011 19:01:35.479138       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1011 19:01:35.479222       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 19:01:35.479265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1011 19:01:35.579841       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 19:01:49.456664       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1011 19:01:49.456885       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1011 19:01:49.457764       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1011 19:01:49.458109       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Oct 11 19:02:02 pause-375900 kubelet[6863]: I1011 19:02:02.808989    6863 kubelet_node_status.go:70] "Attempting to register node" node="pause-375900"
	Oct 11 19:02:02 pause-375900 kubelet[6863]: E1011 19:02:02.810057    6863 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="pause-375900"
	Oct 11 19:02:02 pause-375900 kubelet[6863]: W1011 19:02:02.852465    6863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:02 pause-375900 kubelet[6863]: E1011 19:02:02.852754    6863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:02 pause-375900 kubelet[6863]: I1011 19:02:02.885189    6863 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6945e4707c4df6551d8db4c0565a0dfa48ebe53e5a7604933a086394a40530a5"
	Oct 11 19:02:04 pause-375900 kubelet[6863]: E1011 19:02:04.162928    6863 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-375900?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="3.2s"
	Oct 11 19:02:04 pause-375900 kubelet[6863]: W1011 19:02:04.453591    6863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:04 pause-375900 kubelet[6863]: E1011 19:02:04.453789    6863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:04 pause-375900 kubelet[6863]: I1011 19:02:04.556047    6863 kubelet_node_status.go:70] "Attempting to register node" node="pause-375900"
	Oct 11 19:02:04 pause-375900 kubelet[6863]: E1011 19:02:04.557164    6863 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="pause-375900"
	Oct 11 19:02:04 pause-375900 kubelet[6863]: W1011 19:02:04.663256    6863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:04 pause-375900 kubelet[6863]: E1011 19:02:04.663515    6863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Oct 11 19:02:07 pause-375900 kubelet[6863]: I1011 19:02:07.776328    6863 kubelet_node_status.go:70] "Attempting to register node" node="pause-375900"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.156694    6863 apiserver.go:52] "Watching apiserver"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.164410    6863 topology_manager.go:215] "Topology Admit Handler" podUID="6626c9fe-763e-46b0-a66a-5bd39e157d8d" podNamespace="kube-system" podName="coredns-5dd5756b68-g2h9s"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.165200    6863 topology_manager.go:215] "Topology Admit Handler" podUID="86829575-b97b-4960-a459-934aecb00dd5" podNamespace="kube-system" podName="kube-proxy-6wv6x"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.259314    6863 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.354472    6863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86829575-b97b-4960-a459-934aecb00dd5-lib-modules\") pod \"kube-proxy-6wv6x\" (UID: \"86829575-b97b-4960-a459-934aecb00dd5\") " pod="kube-system/kube-proxy-6wv6x"
	Oct 11 19:02:10 pause-375900 kubelet[6863]: I1011 19:02:10.354671    6863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86829575-b97b-4960-a459-934aecb00dd5-xtables-lock\") pod \"kube-proxy-6wv6x\" (UID: \"86829575-b97b-4960-a459-934aecb00dd5\") " pod="kube-system/kube-proxy-6wv6x"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.450961    6863 kubelet_node_status.go:108] "Node was previously registered" node="pause-375900"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.451258    6863 kubelet_node_status.go:73] "Successfully registered node" node="pause-375900"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.455639    6863 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.458782    6863 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.667722    6863 scope.go:117] "RemoveContainer" containerID="3a68e3e25c04267c77aff941e6b65fa079c9bfbcb4e408574e9594e032f6b4a7"
	Oct 11 19:02:11 pause-375900 kubelet[6863]: I1011 19:02:11.667901    6863 scope.go:117] "RemoveContainer" containerID="e3fc1b46e1feedc3f1e31488df9ea2030aa650e2ca70dadd726e3bc614213b11"
	

-- /stdout --
** stderr ** 
	W1011 19:02:37.437864    9596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-375900 -n pause-375900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-375900 -n pause-375900: (1.6595962s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-375900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (101.01s)
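The stderr captured above shows the Docker CLI failing to resolve context "default" because a `meta.json` under a hashed directory in `.docker\contexts\meta` is missing. The Docker CLI names each context's metadata directory after the SHA-256 digest of the context name, so the path in the warning can be reconstructed offline. A minimal sketch — the helper name is ours, and the exact hash-to-directory mapping is an assumption based on the CLI's context store layout:

```python
import hashlib
from pathlib import PureWindowsPath

def context_meta_path(docker_dir: str, context_name: str) -> PureWindowsPath:
    # The Docker CLI keeps per-context metadata under
    # <docker_dir>\contexts\meta\<sha256(context name)>\meta.json (assumed layout).
    digest = hashlib.sha256(context_name.encode("utf-8")).hexdigest()
    return PureWindowsPath(docker_dir) / "contexts" / "meta" / digest / "meta.json"

# For the "default" context this should yield the hashed path seen in the stderr above.
print(context_meta_path(r"C:\Users\jenkins.minikube2\.docker", "default"))
```

If the computed path matches the one in the warning, the error is simply a missing/never-created default context on the Jenkins worker rather than minikube corrupting anything.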


Test pass (285/314)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.02
4 TestDownloadOnly/v1.16.0/preload-exists 0.06
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.28.2/json-events 10.42
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.27
16 TestDownloadOnly/DeleteAll 2.6
17 TestDownloadOnly/DeleteAlwaysSucceeds 1.2
18 TestDownloadOnlyKic 4.47
19 TestBinaryMirror 3.71
20 TestOffline 170.41
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.27
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.27
25 TestAddons/Setup 481
29 TestAddons/parallel/InspektorGadget 15.47
30 TestAddons/parallel/MetricsServer 9.07
31 TestAddons/parallel/HelmTiller 46.2
33 TestAddons/parallel/CSI 100.01
34 TestAddons/parallel/Headlamp 42.26
35 TestAddons/parallel/CloudSpanner 7.31
36 TestAddons/parallel/LocalPath 93.89
37 TestAddons/parallel/NvidiaDevicePlugin 7.32
40 TestAddons/serial/GCPAuth/Namespaces 0.32
41 TestAddons/StoppedEnableDisable 14.71
42 TestCertOptions 109.07
43 TestCertExpiration 359.65
44 TestDockerFlags 89.65
45 TestForceSystemdFlag 140.74
46 TestForceSystemdEnv 114.2
53 TestErrorSpam/start 4.31
54 TestErrorSpam/status 4.91
55 TestErrorSpam/pause 5.18
56 TestErrorSpam/unpause 4.82
57 TestErrorSpam/stop 22
60 TestFunctional/serial/CopySyncFile 0.03
61 TestFunctional/serial/StartWithProxy 91.81
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 44.49
64 TestFunctional/serial/KubeContext 0.1
65 TestFunctional/serial/KubectlGetPods 0.21
68 TestFunctional/serial/CacheCmd/cache/add_remote 7.7
69 TestFunctional/serial/CacheCmd/cache/add_local 4.03
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.23
71 TestFunctional/serial/CacheCmd/cache/list 0.25
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.32
73 TestFunctional/serial/CacheCmd/cache/cache_reload 5.6
74 TestFunctional/serial/CacheCmd/cache/delete 0.47
75 TestFunctional/serial/MinikubeKubectlCmd 0.49
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.88
77 TestFunctional/serial/ExtraConfig 62.15
78 TestFunctional/serial/ComponentHealth 0.26
79 TestFunctional/serial/LogsCmd 2.67
80 TestFunctional/serial/LogsFileCmd 2.93
81 TestFunctional/serial/InvalidService 7.17
85 TestFunctional/parallel/DryRun 2.87
86 TestFunctional/parallel/InternationalLanguage 1.53
87 TestFunctional/parallel/StatusCmd 5.78
92 TestFunctional/parallel/AddonsCmd 0.97
93 TestFunctional/parallel/PersistentVolumeClaim 85.12
95 TestFunctional/parallel/SSHCmd 3.42
96 TestFunctional/parallel/CpCmd 6.83
97 TestFunctional/parallel/MySQL 110
98 TestFunctional/parallel/FileSync 1.41
99 TestFunctional/parallel/CertSync 8.7
103 TestFunctional/parallel/NodeLabels 0.2
105 TestFunctional/parallel/NonActiveRuntimeDisabled 1.86
107 TestFunctional/parallel/License 3.07
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.9
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.05
113 TestFunctional/parallel/ServiceCmd/DeployApp 33.36
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.21
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.23
120 TestFunctional/parallel/ProfileCmd/profile_not_create 2.13
121 TestFunctional/parallel/ProfileCmd/profile_list 1.73
122 TestFunctional/parallel/ProfileCmd/profile_json_output 1.8
123 TestFunctional/parallel/ServiceCmd/List 2.14
124 TestFunctional/parallel/ServiceCmd/JSONOutput 2.28
125 TestFunctional/parallel/Version/short 0.25
126 TestFunctional/parallel/Version/components 2.8
127 TestFunctional/parallel/ServiceCmd/HTTPS 15.02
128 TestFunctional/parallel/ImageCommands/ImageListShort 1.79
129 TestFunctional/parallel/ImageCommands/ImageListTable 1.19
130 TestFunctional/parallel/ImageCommands/ImageListJson 1.49
131 TestFunctional/parallel/ImageCommands/ImageListYaml 1.87
132 TestFunctional/parallel/ImageCommands/ImageBuild 12.18
133 TestFunctional/parallel/ImageCommands/Setup 4.17
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 18.24
135 TestFunctional/parallel/ServiceCmd/Format 15.03
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.24
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 13.87
138 TestFunctional/parallel/ServiceCmd/URL 15.03
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.58
140 TestFunctional/parallel/DockerEnv/powershell 10.37
141 TestFunctional/parallel/ImageCommands/ImageRemove 2.1
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 9.6
143 TestFunctional/parallel/UpdateContextCmd/no_changes 1.09
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.92
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.96
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.42
147 TestFunctional/delete_addon-resizer_images 1.58
148 TestFunctional/delete_my-image_image 0.2
149 TestFunctional/delete_minikube_cached_images 0.19
153 TestImageBuild/serial/Setup 74.32
154 TestImageBuild/serial/NormalBuild 4.21
155 TestImageBuild/serial/BuildWithBuildArg 2.58
156 TestImageBuild/serial/BuildWithDockerIgnore 3.58
157 TestImageBuild/serial/BuildWithSpecifiedDockerfile 2.48
160 TestIngressAddonLegacy/StartLegacyK8sCluster 131.45
162 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 51.55
163 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 1.94
167 TestJSONOutput/start/Command 87.63
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 1.68
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 1.54
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 13.02
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 1.37
192 TestKicCustomNetwork/create_custom_network 81.85
193 TestKicCustomNetwork/use_default_bridge_network 78.74
194 TestKicExistingNetwork 82.53
195 TestKicCustomSubnet 83.87
196 TestKicStaticIP 81.73
197 TestMainNoArgs 0.22
198 TestMinikubeProfile 156.98
201 TestMountStart/serial/StartWithMountFirst 21.8
202 TestMountStart/serial/VerifyMountFirst 1.09
203 TestMountStart/serial/StartWithMountSecond 19.5
204 TestMountStart/serial/VerifyMountSecond 1.14
205 TestMountStart/serial/DeleteFirst 4.23
206 TestMountStart/serial/VerifyMountPostDelete 1.12
207 TestMountStart/serial/Stop 2.61
208 TestMountStart/serial/RestartStopped 13.57
209 TestMountStart/serial/VerifyMountPostStop 1.14
212 TestMultiNode/serial/FreshStart2Nodes 160.37
213 TestMultiNode/serial/DeployApp2Nodes 25.63
214 TestMultiNode/serial/PingHostFrom2Pods 2.76
215 TestMultiNode/serial/AddNode 58.3
216 TestMultiNode/serial/ProfileList 1.24
217 TestMultiNode/serial/CopyFile 41
218 TestMultiNode/serial/StopNode 6.81
219 TestMultiNode/serial/StartAfterStop 24.97
220 TestMultiNode/serial/RestartKeepsNodes 156.91
221 TestMultiNode/serial/DeleteNode 14.49
222 TestMultiNode/serial/StopMultiNode 25.58
223 TestMultiNode/serial/RestartMultiNode 103.64
224 TestMultiNode/serial/ValidateNameConflict 76.27
228 TestPreload 216.05
229 TestScheduledStopWindows 147.05
233 TestInsufficientStorage 54.46
234 TestRunningBinaryUpgrade 296.63
236 TestKubernetesUpgrade 395.83
237 TestMissingContainerUpgrade 373.28
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.45
240 TestNoKubernetes/serial/StartWithK8s 125.09
241 TestNoKubernetes/serial/StartWithStopK8s 33.53
242 TestNoKubernetes/serial/Start 49.48
243 TestNoKubernetes/serial/VerifyK8sNotRunning 1.49
244 TestNoKubernetes/serial/ProfileList 7.94
245 TestNoKubernetes/serial/Stop 8.24
246 TestNoKubernetes/serial/StartNoArgs 22.47
247 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.65
248 TestStoppedBinaryUpgrade/Setup 0.63
249 TestStoppedBinaryUpgrade/Upgrade 219.82
250 TestStoppedBinaryUpgrade/MinikubeLogs 4.35
259 TestPause/serial/Start 130.82
273 TestStartStop/group/old-k8s-version/serial/FirstStart 197.95
275 TestStartStop/group/no-preload/serial/FirstStart 173.14
277 TestStartStop/group/embed-certs/serial/FirstStart 122.18
279 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 116.39
280 TestStartStop/group/no-preload/serial/DeployApp 12.06
281 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.18
282 TestStartStop/group/no-preload/serial/Stop 12.96
283 TestStartStop/group/old-k8s-version/serial/DeployApp 11.16
284 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.21
285 TestStartStop/group/no-preload/serial/SecondStart 368.81
286 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.84
287 TestStartStop/group/old-k8s-version/serial/Stop 13.19
288 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.36
289 TestStartStop/group/embed-certs/serial/DeployApp 12.15
290 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.17
291 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.14
292 TestStartStop/group/old-k8s-version/serial/SecondStart 472.4
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.08
294 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.25
295 TestStartStop/group/embed-certs/serial/Stop 13.42
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.26
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.26
298 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 385.19
299 TestStartStop/group/embed-certs/serial/SecondStart 371.06
300 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 60.13
301 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 39.15
302 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 52.14
303 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.49
304 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 1.79
305 TestStartStop/group/no-preload/serial/Pause 14.02
306 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 22.85
308 TestStartStop/group/newest-cni/serial/FirstStart 169.17
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 2.64
310 TestStartStop/group/embed-certs/serial/Pause 18.13
311 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.64
312 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 2.14
313 TestStartStop/group/default-k8s-diff-port/serial/Pause 16.17
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 55.17
315 TestNetworkPlugins/group/auto/Start 140.71
316 TestNetworkPlugins/group/kindnet/Start 146.51
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.05
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 1.55
319 TestStartStop/group/old-k8s-version/serial/Pause 13.73
320 TestNetworkPlugins/group/calico/Start 256.14
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.91
323 TestStartStop/group/newest-cni/serial/Stop 13.62
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.46
325 TestStartStop/group/newest-cni/serial/SecondStart 67.01
326 TestNetworkPlugins/group/auto/KubeletFlags 1.55
327 TestNetworkPlugins/group/auto/NetCatPod 22.3
328 TestNetworkPlugins/group/kindnet/ControllerPod 5.07
329 TestNetworkPlugins/group/kindnet/KubeletFlags 1.61
330 TestNetworkPlugins/group/auto/DNS 0.65
331 TestNetworkPlugins/group/kindnet/NetCatPod 25.17
332 TestNetworkPlugins/group/auto/Localhost 0.55
333 TestNetworkPlugins/group/auto/HairPin 0.64
334 TestNetworkPlugins/group/kindnet/DNS 0.63
335 TestNetworkPlugins/group/kindnet/Localhost 0.65
336 TestNetworkPlugins/group/kindnet/HairPin 0.58
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 2.12
340 TestStartStop/group/newest-cni/serial/Pause 19.76
341 TestNetworkPlugins/group/custom-flannel/Start 150.15
342 TestNetworkPlugins/group/false/Start 122.94
343 TestNetworkPlugins/group/enable-default-cni/Start 117.8
344 TestNetworkPlugins/group/calico/ControllerPod 5.15
345 TestNetworkPlugins/group/calico/KubeletFlags 1.69
346 TestNetworkPlugins/group/calico/NetCatPod 27.42
347 TestNetworkPlugins/group/calico/DNS 0.53
348 TestNetworkPlugins/group/calico/Localhost 0.5
349 TestNetworkPlugins/group/calico/HairPin 0.67
350 TestNetworkPlugins/group/false/KubeletFlags 2.12
351 TestNetworkPlugins/group/custom-flannel/KubeletFlags 1.88
352 TestNetworkPlugins/group/custom-flannel/NetCatPod 28.64
353 TestNetworkPlugins/group/false/NetCatPod 28.45
354 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.62
355 TestNetworkPlugins/group/enable-default-cni/NetCatPod 27.95
356 TestNetworkPlugins/group/false/DNS 0.95
357 TestNetworkPlugins/group/custom-flannel/DNS 0.94
358 TestNetworkPlugins/group/false/Localhost 0.69
359 TestNetworkPlugins/group/custom-flannel/Localhost 0.52
360 TestNetworkPlugins/group/false/HairPin 0.73
361 TestNetworkPlugins/group/custom-flannel/HairPin 0.71
362 TestNetworkPlugins/group/enable-default-cni/DNS 0.87
363 TestNetworkPlugins/group/enable-default-cni/Localhost 0.6
364 TestNetworkPlugins/group/enable-default-cni/HairPin 0.91
365 TestNetworkPlugins/group/flannel/Start 173.63
366 TestNetworkPlugins/group/bridge/Start 121.34
367 TestNetworkPlugins/group/kubenet/Start 141.55
368 TestNetworkPlugins/group/flannel/ControllerPod 5.08
369 TestNetworkPlugins/group/flannel/KubeletFlags 1.41
370 TestNetworkPlugins/group/bridge/KubeletFlags 1.55
371 TestNetworkPlugins/group/flannel/NetCatPod 25.25
372 TestNetworkPlugins/group/bridge/NetCatPod 24.25
373 TestNetworkPlugins/group/bridge/DNS 0.45
374 TestNetworkPlugins/group/flannel/DNS 0.58
375 TestNetworkPlugins/group/bridge/Localhost 0.45
376 TestNetworkPlugins/group/flannel/Localhost 0.51
377 TestNetworkPlugins/group/bridge/HairPin 0.38
378 TestNetworkPlugins/group/flannel/HairPin 0.41
379 TestNetworkPlugins/group/kubenet/KubeletFlags 1.31
380 TestNetworkPlugins/group/kubenet/NetCatPod 22.95
381 TestNetworkPlugins/group/kubenet/DNS 0.44
382 TestNetworkPlugins/group/kubenet/Localhost 0.39
383 TestNetworkPlugins/group/kubenet/HairPin 0.47
TestDownloadOnly/v1.16.0/json-events (8.02s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-673900 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-673900 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (8.0233489s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.02s)

TestDownloadOnly/v1.16.0/preload-exists (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.06s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-673900
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-673900: exit status 85 (280.1275ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-673900 | minikube2\jenkins | v1.31.2 | 11 Oct 23 17:50 UTC |          |
	|         | -p download-only-673900        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/11 17:50:59
	Running on machine: minikube2
	Binary: Built with gc go1.21.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 17:50:59.814620    6192 out.go:296] Setting OutFile to fd 584 ...
	I1011 17:50:59.815807    6192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 17:50:59.815807    6192 out.go:309] Setting ErrFile to fd 588...
	I1011 17:50:59.815935    6192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1011 17:50:59.826577    6192 root.go:314] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1011 17:50:59.835955    6192 out.go:303] Setting JSON to true
	I1011 17:50:59.839113    6192 start.go:128] hostinfo: {"hostname":"minikube2","uptime":771,"bootTime":1697045888,"procs":146,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3570 Build 19045.3570","kernelVersion":"10.0.19045.3570 Build 19045.3570","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1011 17:50:59.839113    6192 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1011 17:50:59.845284    6192 out.go:97] [download-only-673900] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	W1011 17:50:59.846512    6192 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1011 17:50:59.846512    6192 notify.go:220] Checking for updates...
	I1011 17:50:59.849247    6192 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1011 17:50:59.853726    6192 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1011 17:50:59.869692    6192 out.go:169] MINIKUBE_LOCATION=17402
	I1011 17:50:59.872210    6192 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1011 17:50:59.877503    6192 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 17:50:59.878144    6192 driver.go:378] Setting default libvirt URI to qemu:///system
	I1011 17:51:00.137849    6192 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.1 (123237)
	I1011 17:51:00.146126    6192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 17:51:00.570724    6192 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2023-10-11 17:51:00.4960278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 17:51:00.575744    6192 out.go:97] Using the docker driver based on user configuration
	I1011 17:51:00.575744    6192 start.go:298] selected driver: docker
	I1011 17:51:00.575744    6192 start.go:902] validating driver "docker" against <nil>
	I1011 17:51:00.587246    6192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 17:51:00.986240    6192 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2023-10-11 17:51:00.9222365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 17:51:00.986883    6192 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1011 17:51:01.120728    6192 start_flags.go:386] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I1011 17:51:01.121463    6192 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 17:51:01.124350    6192 out.go:169] Using Docker Desktop driver with root privileges
	I1011 17:51:01.127285    6192 cni.go:84] Creating CNI manager for ""
	I1011 17:51:01.127732    6192 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1011 17:51:01.127732    6192 start_flags.go:323] config:
	{Name:download-only-673900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-673900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 17:51:01.128357    6192 out.go:97] Starting control plane node download-only-673900 in cluster download-only-673900
	I1011 17:51:01.128357    6192 cache.go:122] Beginning downloading kic base image for docker with docker
	I1011 17:51:01.131100    6192 out.go:97] Pulling base image ...
	I1011 17:51:01.133210    6192 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1011 17:51:01.133295    6192 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1011 17:51:01.184114    6192 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1011 17:51:01.192095    6192 cache.go:57] Caching tarball of preloaded images
	I1011 17:51:01.192314    6192 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1011 17:51:01.195454    6192 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1011 17:51:01.195487    6192 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1011 17:51:01.265693    6192 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1011 17:51:01.302797    6192 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1011 17:51:01.302797    6192 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.40-1696360059-17345@sha256_76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae.tar
	I1011 17:51:01.303319    6192 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.40-1696360059-17345@sha256_76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae.tar
	I1011 17:51:01.303426    6192 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1011 17:51:01.304606    6192 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-673900"

-- /stdout --
** stderr ** 
	W1011 17:51:07.841773    9160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

TestDownloadOnly/v1.28.2/json-events (10.42s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-673900 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-673900 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker: (10.4200874s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (10.42s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.27s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-673900
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-673900: exit status 85 (271.8992ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-673900 | minikube2\jenkins | v1.31.2 | 11 Oct 23 17:50 UTC |          |
	|         | -p download-only-673900        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-673900 | minikube2\jenkins | v1.31.2 | 11 Oct 23 17:51 UTC |          |
	|         | -p download-only-673900        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/11 17:51:08
	Running on machine: minikube2
	Binary: Built with gc go1.21.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 17:51:08.198076    3132 out.go:296] Setting OutFile to fd 632 ...
	I1011 17:51:08.198076    3132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 17:51:08.198076    3132 out.go:309] Setting ErrFile to fd 628...
	I1011 17:51:08.199171    3132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1011 17:51:08.208418    3132 root.go:314] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1011 17:51:08.239672    3132 out.go:303] Setting JSON to true
	I1011 17:51:08.243010    3132 start.go:128] hostinfo: {"hostname":"minikube2","uptime":779,"bootTime":1697045888,"procs":147,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3570 Build 19045.3570","kernelVersion":"10.0.19045.3570 Build 19045.3570","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1011 17:51:08.243010    3132 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1011 17:51:08.678020    3132 out.go:97] [download-only-673900] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	I1011 17:51:08.691491    3132 notify.go:220] Checking for updates...
	I1011 17:51:08.701738    3132 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1011 17:51:08.705889    3132 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1011 17:51:08.712035    3132 out.go:169] MINIKUBE_LOCATION=17402
	I1011 17:51:08.717102    3132 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1011 17:51:08.725823    3132 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 17:51:08.728004    3132 config.go:182] Loaded profile config "download-only-673900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1011 17:51:08.728930    3132 start.go:810] api.Load failed for download-only-673900: filestore "download-only-673900": Docker machine "download-only-673900" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1011 17:51:08.729189    3132 driver.go:378] Setting default libvirt URI to qemu:///system
	W1011 17:51:08.729373    3132 start.go:810] api.Load failed for download-only-673900: filestore "download-only-673900": Docker machine "download-only-673900" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1011 17:51:09.023264    3132 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.1 (123237)
	I1011 17:51:09.029450    3132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 17:51:09.403732    3132 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2023-10-11 17:51:09.3443298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 17:51:09.407998    3132 out.go:97] Using the docker driver based on existing profile
	I1011 17:51:09.407998    3132 start.go:298] selected driver: docker
	I1011 17:51:09.408104    3132 start.go:902] validating driver "docker" against &{Name:download-only-673900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-673900 Namespace:default APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 17:51:09.418645    3132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 17:51:09.798341    3132 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2023-10-11 17:51:09.7335498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 17:51:09.842680    3132 cni.go:84] Creating CNI manager for ""
	I1011 17:51:09.842680    3132 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1011 17:51:09.842680    3132 start_flags.go:323] config:
	{Name:download-only-673900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-673900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseIn
terval:1m0s GPUs:}
	I1011 17:51:10.194181    3132 out.go:97] Starting control plane node download-only-673900 in cluster download-only-673900
	I1011 17:51:10.196601    3132 cache.go:122] Beginning downloading kic base image for docker with docker
	I1011 17:51:10.199671    3132 out.go:97] Pulling base image ...
	I1011 17:51:10.199755    3132 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 17:51:10.199755    3132 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1011 17:51:10.250522    3132 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1011 17:51:10.250522    3132 cache.go:57] Caching tarball of preloaded images
	I1011 17:51:10.250522    3132 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 17:51:10.253717    3132 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1011 17:51:10.254322    3132 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1011 17:51:10.318020    3132 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4?checksum=md5:30a5cb95ef165c1e9196502a3ab2be2b -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1011 17:51:10.381800    3132 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1011 17:51:10.381800    3132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.40-1696360059-17345@sha256_76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae.tar
	I1011 17:51:10.381800    3132 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.40-1696360059-17345@sha256_76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae.tar
	I1011 17:51:10.381800    3132 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1011 17:51:10.381800    3132 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1011 17:51:10.382333    3132 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1011 17:51:10.382377    3132 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1011 17:51:15.643378    3132 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1011 17:51:15.645283    3132 preload.go:256] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1011 17:51:16.627021    3132 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1011 17:51:16.627350    3132 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-673900\config.json ...
	I1011 17:51:16.628522    3132 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1011 17:51:16.630570    3132 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\windows\amd64\v1.28.2/kubectl.exe
	I1011 17:51:17.646324    3132 cache.go:195] Successfully downloaded all kic artifacts
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-673900"

-- /stdout --
** stderr ** 
	W1011 17:51:18.539763    3596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.27s)

TestDownloadOnly/DeleteAll (2.6s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.5948992s)
--- PASS: TestDownloadOnly/DeleteAll (2.60s)

TestDownloadOnly/DeleteAlwaysSucceeds (1.2s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-673900
aaa_download_only_test.go:199: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-673900: (1.1949553s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.20s)

TestDownloadOnlyKic (4.47s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-615500 --alsologtostderr --driver=docker
aaa_download_only_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-615500 --alsologtostderr --driver=docker: (1.8445118s)
helpers_test.go:175: Cleaning up "download-docker-615500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-615500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-615500: (1.5508457s)
--- PASS: TestDownloadOnlyKic (4.47s)

TestBinaryMirror (3.71s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-081400 --alsologtostderr --binary-mirror http://127.0.0.1:64312 --driver=docker
aaa_download_only_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-081400 --alsologtostderr --binary-mirror http://127.0.0.1:64312 --driver=docker: (1.9789004s)
helpers_test.go:175: Cleaning up "binary-mirror-081400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-081400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-081400: (1.5106916s)
--- PASS: TestBinaryMirror (3.71s)

TestOffline (170.41s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-044100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-044100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (2m28.7245089s)
helpers_test.go:175: Cleaning up "offline-docker-044100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-044100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-044100: (21.6805511s)
--- PASS: TestOffline (170.41s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.27s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-642200
addons_test.go:927: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-642200: exit status 85 (270.4092ms)

-- stdout --
	* Profile "addons-642200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-642200"

-- /stdout --
** stderr ** 
	W1011 17:51:32.469690    7940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.27s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-642200
addons_test.go:938: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-642200: exit status 85 (270.4038ms)

-- stdout --
	* Profile "addons-642200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-642200"

-- /stdout --
** stderr ** 
	W1011 17:51:32.475833    1696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

TestAddons/Setup (481s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-642200 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-642200 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (8m1.0013766s)
--- PASS: TestAddons/Setup (481.00s)

TestAddons/parallel/InspektorGadget (15.47s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-76c4r" [1a9993c8-6067-44d2-b0a7-3d65a7842ffa] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.2331433s
addons_test.go:840: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-642200
addons_test.go:840: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-642200: (10.2251841s)
--- PASS: TestAddons/parallel/InspektorGadget (15.47s)

TestAddons/parallel/MetricsServer (9.07s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 111.3988ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-bxlcd" [17cba931-1303-4ce6-b1ef-26231d376155] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.2104368s
addons_test.go:414: (dbg) Run:  kubectl --context addons-642200 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-642200 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-windows-amd64.exe -p addons-642200 addons disable metrics-server --alsologtostderr -v=1: (3.5466036s)
--- PASS: TestAddons/parallel/MetricsServer (9.07s)

TestAddons/parallel/HelmTiller (46.2s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 49.1243ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-65hms" [61c03c70-01fa-4bbd-8c11-bb8fb5f6f7d4] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.135095s
addons_test.go:472: (dbg) Run:  kubectl --context addons-642200 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-642200 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (36.0800369s)
addons_test.go:477: kubectl --context addons-642200 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-642200 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p addons-642200 addons disable helm-tiller --alsologtostderr -v=1: (4.9215329s)
--- PASS: TestAddons/parallel/HelmTiller (46.20s)

TestAddons/parallel/CSI (100.01s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 118.7406ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-642200 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:563: (dbg) Done: kubectl --context addons-642200 create -f testdata\csi-hostpath-driver\pvc.yaml: (1.3545433s)
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-642200 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:573: (dbg) Done: kubectl --context addons-642200 create -f testdata\csi-hostpath-driver\pv-pod.yaml: (2.4927869s)
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8d76a702-aed1-4394-9d74-d3771f2c0652] Pending
helpers_test.go:344: "task-pv-pod" [8d76a702-aed1-4394-9d74-d3771f2c0652] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8d76a702-aed1-4394-9d74-d3771f2c0652] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 41.1855122s
addons_test.go:583: (dbg) Run:  kubectl --context addons-642200 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-642200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-642200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-642200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-642200 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-642200 delete pod task-pv-pod: (4.2779985s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-642200 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-642200 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-642200 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4e86be75-1236-4380-a0ef-308267ef09e8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4e86be75-1236-4380-a0ef-308267ef09e8] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.1140857s
addons_test.go:625: (dbg) Run:  kubectl --context addons-642200 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-642200 delete pod task-pv-pod-restore: (3.7415712s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-642200 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-642200 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-642200 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-windows-amd64.exe -p addons-642200 addons disable csi-hostpath-driver --alsologtostderr -v=1: (15.6213427s)
addons_test.go:641: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-642200 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:641: (dbg) Done: out/minikube-windows-amd64.exe -p addons-642200 addons disable volumesnapshots --alsologtostderr -v=1: (4.5313545s)
--- PASS: TestAddons/parallel/CSI (100.01s)

TestAddons/parallel/Headlamp (42.26s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-642200 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-642200 --alsologtostderr -v=1: (8.1056823s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-pcwvq" [0353bed3-9aab-47d0-9211-3fe220c7da03] Pending
helpers_test.go:344: "headlamp-94b766c-pcwvq" [0353bed3-9aab-47d0-9211-3fe220c7da03] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-pcwvq" [0353bed3-9aab-47d0-9211-3fe220c7da03] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 34.1532469s
--- PASS: TestAddons/parallel/Headlamp (42.26s)

TestAddons/parallel/CloudSpanner (7.31s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-pj42c" [31d36220-e45b-4b51-acde-a0348c2720b6] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.1059315s
addons_test.go:859: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-642200
addons_test.go:859: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-642200: (2.1508289s)
--- PASS: TestAddons/parallel/CloudSpanner (7.31s)

TestAddons/parallel/LocalPath (93.89s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-642200 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-642200 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (previous command repeated 26 times while polling the PVC phase)
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b93d9e14-b810-4482-9c84-aea81230ada4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b93d9e14-b810-4482-9c84-aea81230ada4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b93d9e14-b810-4482-9c84-aea81230ada4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.3513879s
addons_test.go:890: (dbg) Run:  kubectl --context addons-642200 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-642200 ssh "cat /opt/local-path-provisioner/pvc-7027fc0d-6ef8-4f73-8b2d-3da69e4cba14_default_test-pvc/file1"
addons_test.go:899: (dbg) Done: out/minikube-windows-amd64.exe -p addons-642200 ssh "cat /opt/local-path-provisioner/pvc-7027fc0d-6ef8-4f73-8b2d-3da69e4cba14_default_test-pvc/file1": (1.4417932s)
addons_test.go:911: (dbg) Run:  kubectl --context addons-642200 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-642200 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-642200 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-windows-amd64.exe -p addons-642200 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (36.807715s)
--- PASS: TestAddons/parallel/LocalPath (93.89s)
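The long run of identical `kubectl get pvc … -o jsonpath={.status.phase}` invocations above is the test's poll loop: rerun a command until its output reaches an expected value or a timeout expires. A minimal generic sketch of that pattern — the `poll` helper and the local stand-in command are illustrative, not part of the minikube test suite:

```shell
#!/bin/sh
# poll CMD WANT TIMEOUT: rerun CMD once per second until its stdout equals
# WANT, or fail after TIMEOUT seconds. Mirrors how the test repeatedly reads
# the PVC's .status.phase via jsonpath.
poll() {
  cmd=$1; want=$2; timeout=$3; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    got=$(eval "$cmd" 2>/dev/null)
    if [ "$got" = "$want" ]; then
      echo "reached '$want' after ${elapsed}s"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "timed out waiting for '$want'" >&2
  return 1
}

# The real check would look something like (PVCs report phase "Bound" once
# provisioned):
#   poll 'kubectl --context addons-642200 get pvc test-pvc -o jsonpath={.status.phase} -n default' Bound 180
# Stand-in here: a file whose contents change after two seconds.
phase_file=$(mktemp)
( sleep 2; printf Bound > "$phase_file" ) &
poll "cat $phase_file" Bound 10
```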

TestAddons/parallel/NvidiaDevicePlugin (7.32s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-m4gks" [17e9d1f3-5819-48fd-8f64-4652374fb0d3] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.1927794s
addons_test.go:954: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-642200
addons_test.go:954: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-642200: (2.1260019s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.32s)

TestAddons/serial/GCPAuth/Namespaces (0.32s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-642200 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-642200 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.32s)

TestAddons/StoppedEnableDisable (14.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-642200
addons_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-642200: (12.9984871s)
addons_test.go:175: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-642200
addons_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-642200
addons_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-642200
--- PASS: TestAddons/StoppedEnableDisable (14.71s)

TestCertOptions (109.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-940000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E1011 18:59:33.660718    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-940000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m36.315336s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-940000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-940000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.2836367s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-940000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-940000 -- "sudo cat /etc/kubernetes/admin.conf": (1.2294152s)
helpers_test.go:175: Cleaning up "cert-options-940000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-940000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-940000: (10.0356048s)
--- PASS: TestCertOptions (109.07s)
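TestCertOptions passes extra names and IPs via `--apiserver-names`/`--apiserver-ips`, then verifies they landed in the API server certificate's Subject Alternative Name extension with `openssl x509 -text -noout`. The same check can be reproduced against a throwaway self-signed certificate; this sketch assumes OpenSSL ≥ 1.1.1 for `-addext`, and the file names are illustrative:

```shell
#!/bin/sh
set -e
# Generate a self-signed cert carrying the same SAN entries the test requests,
# then inspect it the way cert_options_test.go does over ssh.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  2>/dev/null
# The test greps the -text dump for each expected name and IP:
openssl x509 -text -noout -in "$dir/cert.pem" | grep -A1 'Subject Alternative Name'
```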

TestCertExpiration (359.65s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-206700 --memory=2048 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-206700 --memory=2048 --cert-expiration=3m --driver=docker: (2m4.257788s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-206700 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-206700 --memory=2048 --cert-expiration=8760h --driver=docker: (41.927924s)
helpers_test.go:175: Cleaning up "cert-expiration-206700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-206700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-206700: (13.452761s)
--- PASS: TestCertExpiration (359.65s)

TestDockerFlags (89.65s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-068100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-068100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m14.594876s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-068100 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-068100 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.403729s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-068100 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-068100 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.4479688s)
helpers_test.go:175: Cleaning up "docker-flags-068100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-068100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-068100: (12.1997421s)
--- PASS: TestDockerFlags (89.65s)
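TestDockerFlags asserts that every `--docker-env` value shows up in the docker unit's `Environment` property, queried on the node with `sudo systemctl show docker --property=Environment --no-pager`. Since that query needs a live systemd node, this sketch runs the assertion against a captured sample line (the sample value is illustrative, not taken from this run):

```shell
#!/bin/sh
# Sample of what `systemctl show docker --property=Environment` prints;
# on a real cluster this line would be captured via `minikube ssh`.
line='Environment=FOO=BAR BAZ=BAT NO_PROXY=localhost'

# Assert each --docker-env flag is present, word-for-word, in the property.
check_env() {
  case " ${line#Environment=} " in
    *" $1 "*) echo "found $1" ;;
    *) echo "missing $1" >&2; return 1 ;;
  esac
}

check_env FOO=BAR   # prints "found FOO=BAR"
check_env BAZ=BAT   # prints "found BAZ=BAT"
```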

TestForceSystemdFlag (140.74s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-044100 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-044100 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (2m5.5469145s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-044100 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-044100 ssh "docker info --format {{.CgroupDriver}}": (1.8614512s)
helpers_test.go:175: Cleaning up "force-systemd-flag-044100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-044100
E1011 18:53:02.707487    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-044100: (13.3272956s)
--- PASS: TestForceSystemdFlag (140.74s)

TestForceSystemdEnv (114.2s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-769500 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-769500 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m45.9843738s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-769500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-769500 ssh "docker info --format {{.CgroupDriver}}": (1.7069616s)
helpers_test.go:175: Cleaning up "force-systemd-env-769500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-769500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-769500: (6.5130823s)
--- PASS: TestForceSystemdEnv (114.20s)

TestErrorSpam/start (4.31s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 start --dry-run: (1.4235464s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 start --dry-run: (1.3890491s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 start --dry-run: (1.4965894s)
--- PASS: TestErrorSpam/start (4.31s)

TestErrorSpam/status (4.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 status: (1.5264279s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 status: (1.4918848s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 status: (1.8886858s)
--- PASS: TestErrorSpam/status (4.91s)

TestErrorSpam/pause (5.18s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 pause: (2.61313s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 pause: (1.3112731s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 pause: (1.2497299s)
--- PASS: TestErrorSpam/pause (5.18s)

TestErrorSpam/unpause (4.82s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 unpause: (1.6338517s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 unpause: (1.8180428s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 unpause: (1.3652379s)
--- PASS: TestErrorSpam/unpause (4.82s)

TestErrorSpam/stop (22s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 stop: (12.7572246s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 stop: (4.6580871s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-149300 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-149300 stop: (4.585029s)
--- PASS: TestErrorSpam/stop (22.00s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\1556\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (91.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-420000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E1011 18:04:33.653560    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:33.682704    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:33.701906    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:33.729024    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:33.771972    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:33.861074    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:34.024751    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:34.370470    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:35.013750    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:36.305636    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:38.866875    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:43.998081    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:04:54.247529    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:05:14.736681    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-420000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m31.7993912s)
--- PASS: TestFunctional/serial/StartWithProxy (91.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.49s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-420000 --alsologtostderr -v=8
E1011 18:05:55.710564    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-420000 --alsologtostderr -v=8: (44.4868534s)
functional_test.go:659: soft start took 44.4883333s for "functional-420000" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.49s)

TestFunctional/serial/KubeContext (0.1s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.10s)

TestFunctional/serial/KubectlGetPods (0.21s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-420000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

TestFunctional/serial/CacheCmd/cache/add_remote (7.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 cache add registry.k8s.io/pause:3.1: (2.6941344s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 cache add registry.k8s.io/pause:3.3: (2.4289208s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 cache add registry.k8s.io/pause:latest: (2.5805939s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.70s)

TestFunctional/serial/CacheCmd/cache/add_local (4.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-420000 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3875224422\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-420000 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3875224422\001: (1.8283785s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 cache add minikube-local-cache-test:functional-420000
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 cache add minikube-local-cache-test:functional-420000: (1.7350987s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 cache delete minikube-local-cache-test:functional-420000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-420000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.23s)

TestFunctional/serial/CacheCmd/cache/list (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.25s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh sudo crictl images: (1.3238371s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (5.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh sudo docker rmi registry.k8s.io/pause:latest: (1.2418829s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-420000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (1.2480701s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W1011 18:06:39.498464    2196 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 cache reload: (1.8784225s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (1.2281477s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (5.60s)
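The recurring Docker CLI warning above points at `...\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json`: Docker's context store names each context's metadata directory after the SHA-256 hex digest of the context name. A quick sketch (pure Python, no Docker required) shows where that directory name comes from:

```python
import hashlib

# Docker's context store keys each context's metadata directory by the
# SHA-256 hex digest of the context name ("default" in this report).
context_name = "default"
digest = hashlib.sha256(context_name.encode("utf-8")).hexdigest()
print(digest)
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

So the warning simply means the `default` context's `meta.json` is missing under `.docker\contexts\meta\<sha256("default")>\`.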

TestFunctional/serial/CacheCmd/cache/delete (0.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.47s)

TestFunctional/serial/MinikubeKubectlCmd (0.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 kubectl -- --context functional-420000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.88s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out\kubectl.exe --context functional-420000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.88s)

TestFunctional/serial/ExtraConfig (62.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-420000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1011 18:07:17.645137    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-420000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.153127s)
functional_test.go:757: restart took 1m2.153203s for "functional-420000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (62.15s)
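The `--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision` flag used above follows a `component.key=value` shape. A minimal parsing sketch (an illustration of the flag's format, not minikube's actual implementation):

```python
def parse_extra_config(flag: str):
    """Split a minikube --extra-config value into (component, key, value).

    The component is everything before the first '.', and the value is
    everything after the first '='.
    """
    lhs, _, value = flag.partition("=")
    component, _, key = lhs.partition(".")
    return component, key, value

print(parse_extra_config("apiserver.enable-admission-plugins=NamespaceAutoProvision"))
# → ('apiserver', 'enable-admission-plugins', 'NamespaceAutoProvision')
```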

TestFunctional/serial/ComponentHealth (0.26s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-420000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.26s)

TestFunctional/serial/LogsCmd (2.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 logs: (2.6705741s)
--- PASS: TestFunctional/serial/LogsCmd (2.67s)

TestFunctional/serial/LogsFileCmd (2.93s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2747710718\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2747710718\001\logs.txt: (2.9287319s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.93s)

TestFunctional/serial/InvalidService (7.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-420000 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-420000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-420000: exit status 115 (2.0580362s)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31247 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	W1011 18:07:57.319065   10164 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_service_6bd82f1fe87f7552f02cc11dc4370801e3dafecc_2.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-420000 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-420000 delete -f testdata\invalidsvc.yaml: (1.4943248s)
--- PASS: TestFunctional/serial/InvalidService (7.17s)
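Note that the exit-115 run above still prints a service table with a NodePort URL (`http://192.168.49.2:31247`) even though no pod backs the service. A small check (illustrative only) confirms the advertised port sits in Kubernetes' default NodePort range of 30000-32767:

```python
from urllib.parse import urlparse

def is_default_nodeport(port: int) -> bool:
    # Kubernetes allocates NodePorts from 30000-32767 unless the apiserver's
    # --service-node-port-range flag is set to something else.
    return 30000 <= port <= 32767

url = "http://192.168.49.2:31247"  # URL from the service table above
port = urlparse(url).port
print(port, is_default_nodeport(port))
# → 31247 True
```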

TestFunctional/parallel/DryRun (2.87s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-420000 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-420000 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.3220546s)

-- stdout --
	* [functional-420000] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	W1011 18:08:38.330513    8996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1011 18:08:38.434353    8996 out.go:296] Setting OutFile to fd 896 ...
	I1011 18:08:38.437297    8996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 18:08:38.437297    8996 out.go:309] Setting ErrFile to fd 828...
	I1011 18:08:38.437297    8996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 18:08:38.472629    8996 out.go:303] Setting JSON to false
	I1011 18:08:38.477820    8996 start.go:128] hostinfo: {"hostname":"minikube2","uptime":1829,"bootTime":1697045888,"procs":146,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3570 Build 19045.3570","kernelVersion":"10.0.19045.3570 Build 19045.3570","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1011 18:08:38.478060    8996 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1011 18:08:38.482234    8996 out.go:177] * [functional-420000] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	I1011 18:08:38.486079    8996 notify.go:220] Checking for updates...
	I1011 18:08:38.488980    8996 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1011 18:08:38.492083    8996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 18:08:38.495557    8996 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1011 18:08:38.498676    8996 out.go:177]   - MINIKUBE_LOCATION=17402
	I1011 18:08:38.502042    8996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 18:08:38.505813    8996 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 18:08:38.507893    8996 driver.go:378] Setting default libvirt URI to qemu:///system
	I1011 18:08:38.904274    8996 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.1 (123237)
	I1011 18:08:38.916131    8996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 18:08:39.360788    8996 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:71 SystemTime:2023-10-11 18:08:39.2976504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 18:08:39.365952    8996 out.go:177] * Using the docker driver based on existing profile
	I1011 18:08:39.368232    8996 start.go:298] selected driver: docker
	I1011 18:08:39.368232    8996 start.go:902] validating driver "docker" against &{Name:functional-420000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-420000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 18:08:39.368232    8996 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 18:08:39.420056    8996 out.go:177] 
	W1011 18:08:39.423068    8996 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1011 18:08:39.427305    8996 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-420000 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:987: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-420000 --dry-run --alsologtostderr -v=1 --driver=docker: (1.5508592s)
--- PASS: TestFunctional/parallel/DryRun (2.87s)
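The dry run fails with `RSRC_INSUFFICIENT_REQ_MEMORY` because the requested 250MB is below minikube's usable minimum of 1800MB; the guard amounts to a threshold comparison before anything is created. A sketch of that check (mirroring the error message above, not minikube's actual validation code):

```python
MIN_USABLE_MB = 1800  # usable minimum quoted in the error message above

def validate_requested_memory(requested_mb: int) -> None:
    # Reject under-sized requests up front, before any container is created,
    # the way the --dry-run invocation above does.
    if requested_mb < MIN_USABLE_MB:
        raise ValueError(
            f"Requested memory allocation {requested_mb}MB is less than "
            f"the usable minimum of {MIN_USABLE_MB}MB"
        )

try:
    validate_requested_memory(250)   # the --memory 250MB dry run
except ValueError as e:
    print(e)
validate_requested_memory(4000)      # the profile's configured 4000MB passes
```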

TestFunctional/parallel/InternationalLanguage (1.53s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-420000 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-420000 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.5257932s)

-- stdout --
	* [functional-420000] minikube v1.31.2 sur Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	W1011 18:08:36.813204    9492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1011 18:08:36.912170    9492 out.go:296] Setting OutFile to fd 860 ...
	I1011 18:08:36.912713    9492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 18:08:36.912713    9492 out.go:309] Setting ErrFile to fd 692...
	I1011 18:08:36.912713    9492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 18:08:36.934043    9492 out.go:303] Setting JSON to false
	I1011 18:08:36.937420    9492 start.go:128] hostinfo: {"hostname":"minikube2","uptime":1828,"bootTime":1697045888,"procs":146,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3570 Build 19045.3570","kernelVersion":"10.0.19045.3570 Build 19045.3570","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W1011 18:08:36.937643    9492 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1011 18:08:36.944738    9492 out.go:177] * [functional-420000] minikube v1.31.2 sur Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	I1011 18:08:36.948830    9492 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I1011 18:08:36.948213    9492 notify.go:220] Checking for updates...
	I1011 18:08:36.951617    9492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 18:08:36.955042    9492 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I1011 18:08:36.957829    9492 out.go:177]   - MINIKUBE_LOCATION=17402
	I1011 18:08:36.960721    9492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 18:08:36.964258    9492 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 18:08:36.966515    9492 driver.go:378] Setting default libvirt URI to qemu:///system
	I1011 18:08:37.339598    9492 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.1 (123237)
	I1011 18:08:37.357555    9492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 18:08:37.910536    9492 info.go:266] docker info: {ID:fddc6918-7749-4ebe-a6e7-06311fb56dc1 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:71 SystemTime:2023-10-11 18:08:37.8436243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:
Docker Inc. Version:v0.2.20] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1011 18:08:37.916802    9492 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1011 18:08:37.919302    9492 start.go:298] selected driver: docker
	I1011 18:08:37.919302    9492 start.go:902] validating driver "docker" against &{Name:functional-420000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-420000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1011 18:08:37.919302    9492 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 18:08:38.081304    9492 out.go:177] 
	W1011 18:08:38.084593    9492 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1011 18:08:38.087026    9492 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.53s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (5.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 status: (1.5874541s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (1.5816137s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 status -o json: (2.6135532s)
--- PASS: TestFunctional/parallel/StatusCmd (5.78s)

TestFunctional/parallel/AddonsCmd (0.97s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.97s)

TestFunctional/parallel/PersistentVolumeClaim (85.12s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0acb8031-7db6-49b9-be70-981bf3ef948f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0909877s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-420000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-420000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-420000 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-420000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-420000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d8182436-4c55-4716-8742-88d5da1e15b0] Pending
helpers_test.go:344: "sp-pod" [d8182436-4c55-4716-8742-88d5da1e15b0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d8182436-4c55-4716-8742-88d5da1e15b0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m2.0711077s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-420000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-420000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-420000 delete -f testdata/storage-provisioner/pod.yaml: (1.8482797s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-420000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [69c1e163-6b1e-429e-8fa8-c193f53abbdd] Pending
helpers_test.go:344: "sp-pod" [69c1e163-6b1e-429e-8fa8-c193f53abbdd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [69c1e163-6b1e-429e-8fa8-c193f53abbdd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0824373s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-420000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (85.12s)

TestFunctional/parallel/SSHCmd (3.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh "echo hello"
functional_test.go:1724: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh "echo hello": (1.6815942s)
functional_test.go:1741: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh "cat /etc/hostname"
functional_test.go:1741: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh "cat /etc/hostname": (1.7377358s)
--- PASS: TestFunctional/parallel/SSHCmd (3.42s)

TestFunctional/parallel/CpCmd (6.83s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 cp testdata\cp-test.txt /home/docker/cp-test.txt: (1.5142109s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh -n functional-420000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh -n functional-420000 "sudo cat /home/docker/cp-test.txt": (1.936019s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 cp functional-420000:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd1777761937\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 cp functional-420000:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd1777761937\001\cp-test.txt: (1.7623241s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh -n functional-420000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh -n functional-420000 "sudo cat /home/docker/cp-test.txt": (1.6125933s)
--- PASS: TestFunctional/parallel/CpCmd (6.83s)

TestFunctional/parallel/MySQL (110s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-420000 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-rhns2" [1c3bdd35-04d9-4964-97bb-20b2c1fced4e] Pending
helpers_test.go:344: "mysql-859648c796-rhns2" [1c3bdd35-04d9-4964-97bb-20b2c1fced4e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-rhns2" [1c3bdd35-04d9-4964-97bb-20b2c1fced4e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m33.0942891s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;": exit status 1 (335.0484ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;": exit status 1 (421.7667ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;": exit status 1 (817.7383ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;": exit status 1 (459.8177ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;": exit status 1 (419.9576ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-420000 exec mysql-859648c796-rhns2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (110.00s)

TestFunctional/parallel/FileSync (1.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1556/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /etc/test/nested/copy/1556/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /etc/test/nested/copy/1556/hosts": (1.4103275s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.41s)

TestFunctional/parallel/CertSync (8.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1556.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /etc/ssl/certs/1556.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /etc/ssl/certs/1556.pem": (1.5019314s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1556.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /usr/share/ca-certificates/1556.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /usr/share/ca-certificates/1556.pem": (1.407552s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.4873946s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /etc/ssl/certs/15562.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /etc/ssl/certs/15562.pem": (1.3175971s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /usr/share/ca-certificates/15562.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /usr/share/ca-certificates/15562.pem": (1.4168215s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (1.5650779s)
--- PASS: TestFunctional/parallel/CertSync (8.70s)

TestFunctional/parallel/NodeLabels (0.2s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-420000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.20s)

TestFunctional/parallel/NonActiveRuntimeDisabled (1.86s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-420000 ssh "sudo systemctl is-active crio": exit status 1 (1.8581532s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W1011 18:08:44.380342    4896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.86s)

TestFunctional/parallel/License (3.07s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.057556s)
--- PASS: TestFunctional/parallel/License (3.07s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.9s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-420000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-420000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-420000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8580: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 7768: OpenProcess: The parameter is incorrect.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-420000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.90s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-420000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-420000 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8b6624f9-69bd-46ac-b235-4b49d72e7f0c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8b6624f9-69bd-46ac-b235-4b49d72e7f0c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 21.0936381s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (33.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-420000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-420000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-rlxww" [73c3bcc4-f871-4279-a1fa-ac5d94a12bfa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-rlxww" [73c3bcc4-f871-4279-a1fa-ac5d94a12bfa] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 32.1999938s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (33.36s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-420000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.21s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-420000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9564: TerminateProcess: Access is denied.
helpers_test.go:508: unable to kill pid 8808: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (2.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1274: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.67589s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (2.13s)

TestFunctional/parallel/ProfileCmd/profile_list (1.73s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1309: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.4995737s)
functional_test.go:1314: Took "1.4999099s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1328: Took "230.0349ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.73s)

TestFunctional/parallel/ProfileCmd/profile_json_output (1.8s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1360: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.5711142s)
functional_test.go:1365: Took "1.5711731s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1378: Took "225.5969ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.80s)

TestFunctional/parallel/ServiceCmd/List (2.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 service list
functional_test.go:1458: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 service list: (2.1379469s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (2.14s)

TestFunctional/parallel/ServiceCmd/JSONOutput (2.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 service list -o json: (2.2753373s)
functional_test.go:1493: Took "2.2754551s" to run "out/minikube-windows-amd64.exe -p functional-420000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (2.28s)

TestFunctional/parallel/Version/short (0.25s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.25s)

TestFunctional/parallel/Version/components (2.8s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 version -o=json --components: (2.8034303s)
--- PASS: TestFunctional/parallel/Version/components (2.80s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-420000 service --namespace=default --https --url hello-node: exit status 1 (15.0196719s)

-- stdout --
	https://127.0.0.1:49422

-- /stdout --
** stderr ** 
	W1011 18:08:45.508187    6388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:49422
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)
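Editor's note: every stderr block in this report opens with the same `Unable to resolve the current Docker CLI context "default"` warning pointing at a `contexts\meta\37a8eec1...\meta.json` path; it is also the warning flagged as `unexpected stderr` in the failed `TestErrorSpam/setup` run above. The Docker CLI context store names each context's metadata directory after the SHA-256 digest of the context name, and the digest in these logs is exactly the digest of the string `default`, which can be confirmed with a one-liner:

```python
import hashlib

# The Docker CLI context store keys each context's metadata directory by the
# SHA-256 hex digest of the context name; the digest appearing throughout
# these warnings is the digest of the name "default".
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

So the warning means the `meta.json` for the built-in `default` context is missing on this Jenkins agent; the tests here still pass because the CLI falls back to the default endpoint.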

TestFunctional/parallel/ImageCommands/ImageListShort (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image ls --format short --alsologtostderr: (1.7920991s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-420000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-420000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-420000
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-420000 image ls --format short --alsologtostderr:
W1011 18:09:53.515739     936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1011 18:09:53.644305     936 out.go:296] Setting OutFile to fd 664 ...
I1011 18:09:53.645528     936 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1011 18:09:53.645528     936 out.go:309] Setting ErrFile to fd 584...
I1011 18:09:53.645528     936 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1011 18:09:53.668197     936 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1011 18:09:53.668758     936 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1011 18:09:53.695100     936 cli_runner.go:164] Run: docker container inspect functional-420000 --format={{.State.Status}}
I1011 18:09:53.994767     936 ssh_runner.go:195] Run: systemctl --version
I1011 18:09:54.017851     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-420000
I1011 18:09:54.295143     936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65263 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-420000\id_rsa Username:docker}
I1011 18:09:54.775706     936 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.79s)

TestFunctional/parallel/ImageCommands/ImageListTable (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image ls --format table --alsologtostderr: (1.1865157s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-420000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| gcr.io/google-containers/addon-resizer      | functional-420000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | alpine            | d571254277f6a | 42.6MB |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| docker.io/library/minikube-local-cache-test | functional-420000 | d9478aba330ed | 30B    |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-420000 image ls --format table --alsologtostderr:
W1011 18:09:56.841122    9752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1011 18:09:56.929726    9752 out.go:296] Setting OutFile to fd 632 ...
I1011 18:09:56.930909    9752 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1011 18:09:56.930909    9752 out.go:309] Setting ErrFile to fd 996...
I1011 18:09:56.930909    9752 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1011 18:09:56.949897    9752 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1011 18:09:56.950621    9752 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1011 18:09:56.968746    9752 cli_runner.go:164] Run: docker container inspect functional-420000 --format={{.State.Status}}
I1011 18:09:57.205351    9752 ssh_runner.go:195] Run: systemctl --version
I1011 18:09:57.212190    9752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-420000
I1011 18:09:57.434358    9752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65263 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-420000\id_rsa Username:docker}
I1011 18:09:57.662420    9752 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E1011 18:10:01.501759    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (1.19s)

TestFunctional/parallel/ImageCommands/ImageListJson (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image ls --format json --alsologtostderr: (1.4948336s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-420000 image ls --format json --alsologtostderr:
[{"id":"d9478aba330edc415b6b044d4bfa1b257304293d042df66a6b8d8f370cd86e07","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-420000"],"size":"30"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-420000"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-420000 image ls --format json --alsologtostderr:
W1011 18:09:55.354680     720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1011 18:09:55.433424     720 out.go:296] Setting OutFile to fd 716 ...
I1011 18:09:55.434019     720 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1011 18:09:55.434103     720 out.go:309] Setting ErrFile to fd 984...
I1011 18:09:55.434152     720 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1011 18:09:55.453134     720 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1011 18:09:55.453743     720 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1011 18:09:55.473792     720 cli_runner.go:164] Run: docker container inspect functional-420000 --format={{.State.Status}}
I1011 18:09:55.711997     720 ssh_runner.go:195] Run: systemctl --version
I1011 18:09:55.720093     720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-420000
I1011 18:09:55.927231     720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65263 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-420000\id_rsa Username:docker}
I1011 18:09:56.462860     720 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.49s)
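Editor's note: the `json` and `table` listings carry the same data; the JSON `size` field is a decimal byte count, and the table renders it with SI units to three significant figures (73100000 → 73.1MB). A quick sketch of that conversion follows; the `human_size` helper is illustrative, not minikube's own formatter, though it reproduces the values shown here (the B/kB/MB range is all this listing needs):

```python
import json

# One entry from the `image ls --format json` output above; `size` is a
# decimal byte count, which the `--format table` output shows as "73.1MB".
entry = json.loads(
    '{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0",'
    '"repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],'
    '"size":"73100000"}'
)

def human_size(n: int) -> str:
    """Approximate the table's Size column: SI (decimal) units, 3 significant figures."""
    for unit, scale in (("MB", 1_000_000), ("kB", 1_000)):
        if n >= scale:
            return f"{n / scale:.3g}{unit}"
    return f"{n}B"

print(entry["repoTags"][0], human_size(int(entry["size"])))
# → registry.k8s.io/kube-proxy:v1.28.2 73.1MB
```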

TestFunctional/parallel/ImageCommands/ImageListYaml (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image ls --format yaml --alsologtostderr: (1.8704344s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-420000 image ls --format yaml --alsologtostderr:
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-420000
size: "32900000"
- id: d9478aba330edc415b6b044d4bfa1b257304293d042df66a6b8d8f370cd86e07
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-420000
size: "30"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-420000 image ls --format yaml --alsologtostderr:
W1011 18:09:53.516688    9424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1011 18:09:53.646099    9424 out.go:296] Setting OutFile to fd 924 ...
I1011 18:09:53.667489    9424 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1011 18:09:53.667551    9424 out.go:309] Setting ErrFile to fd 628...
I1011 18:09:53.667605    9424 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1011 18:09:53.692166    9424 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1011 18:09:53.692877    9424 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1011 18:09:53.725683    9424 cli_runner.go:164] Run: docker container inspect functional-420000 --format={{.State.Status}}
I1011 18:09:54.021704    9424 ssh_runner.go:195] Run: systemctl --version
I1011 18:09:54.036810    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-420000
I1011 18:09:54.313237    9424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65263 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-420000\id_rsa Username:docker}
I1011 18:09:54.775392    9424 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.87s)

TestFunctional/parallel/ImageCommands/ImageBuild (12.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-420000 ssh pgrep buildkitd: exit status 1 (1.4744499s)

** stderr ** 
	W1011 18:09:55.285535    2792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image build -t localhost/my-image:functional-420000 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image build -t localhost/my-image:functional-420000 testdata\build --alsologtostderr: (9.6125946s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-420000 image build -t localhost/my-image:functional-420000 testdata\build --alsologtostderr:
W1011 18:09:56.738798    8992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I1011 18:09:56.827597    8992 out.go:296] Setting OutFile to fd 804 ...
I1011 18:09:56.847339    8992 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1011 18:09:56.847339    8992 out.go:309] Setting ErrFile to fd 588...
I1011 18:09:56.847339    8992 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1011 18:09:56.866201    8992 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1011 18:09:56.885444    8992 config.go:182] Loaded profile config "functional-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1011 18:09:56.910748    8992 cli_runner.go:164] Run: docker container inspect functional-420000 --format={{.State.Status}}
I1011 18:09:57.169495    8992 ssh_runner.go:195] Run: systemctl --version
I1011 18:09:57.176956    8992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-420000
I1011 18:09:57.382865    8992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65263 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-420000\id_rsa Username:docker}
I1011 18:09:57.753165    8992 build_images.go:151] Building image from path: C:\Users\jenkins.minikube2\AppData\Local\Temp\build.1738029456.tar
I1011 18:09:57.769448    8992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1011 18:09:57.965865    8992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1738029456.tar
I1011 18:09:57.977217    8992 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1738029456.tar: stat -c "%s %y" /var/lib/minikube/build/build.1738029456.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1738029456.tar': No such file or directory
I1011 18:09:57.977217    8992 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\AppData\Local\Temp\build.1738029456.tar --> /var/lib/minikube/build/build.1738029456.tar (3072 bytes)
I1011 18:09:58.094691    8992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1738029456
I1011 18:09:58.162238    8992 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1738029456 -xf /var/lib/minikube/build/build.1738029456.tar
I1011 18:09:58.187336    8992 docker.go:341] Building image: /var/lib/minikube/build/build.1738029456
I1011 18:09:58.194899    8992 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-420000 /var/lib/minikube/build/build.1738029456
#0 building with "default" instance using docker driver

#1 [internal] load .dockerignore
#1 transferring context: 0.1s
#1 transferring context: 2B 0.1s done
#1 DONE 0.4s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.5s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.9s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 ...

#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.2s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.2s done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#4 DONE 1.4s

#6 [2/3] RUN true
#6 DONE 3.0s

#7 [3/3] ADD content.txt /
#7 DONE 0.3s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 writing image sha256:7a21c5be18cafb546b18e83b770f7b730febd5e27716c44390f342b61020a8fd 0.0s done
#8 naming to localhost/my-image:functional-420000 0.1s done
#8 DONE 0.3s
I1011 18:10:05.946645    8992 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-420000 /var/lib/minikube/build/build.1738029456: (7.7515571s)
I1011 18:10:05.960719    8992 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1738029456
I1011 18:10:06.072632    8992 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1738029456.tar
I1011 18:10:06.161386    8992 build_images.go:207] Built localhost/my-image:functional-420000 from C:\Users\jenkins.minikube2\AppData\Local\Temp\build.1738029456.tar
I1011 18:10:06.161386    8992 build_images.go:123] succeeded building to: functional-420000
I1011 18:10:06.161386    8992 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image ls: (1.0891137s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (12.18s)
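Editor's note: the BuildKit steps logged above (`load build definition ... 97B`, `[1/3] FROM gcr.io/k8s-minikube/busybox`, `[2/3] RUN true`, `[3/3] ADD content.txt /`) imply a `testdata\build` Dockerfile of roughly this shape; this is a reconstruction from the log, not a copy from the repository:

```dockerfile
# Hypothetical reconstruction of testdata\build\Dockerfile from the build steps above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```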

TestFunctional/parallel/ImageCommands/Setup (4.17s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.9474355s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-420000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (18.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image load --daemon gcr.io/google-containers/addon-resizer:functional-420000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image load --daemon gcr.io/google-containers/addon-resizer:functional-420000 --alsologtostderr: (17.3326482s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (18.24s)

TestFunctional/parallel/ServiceCmd/Format (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-420000 service hello-node --url --format={{.IP}}: exit status 1 (15.0252342s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	W1011 18:09:00.523956    3192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.03s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image load --daemon gcr.io/google-containers/addon-resizer:functional-420000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image load --daemon gcr.io/google-containers/addon-resizer:functional-420000 --alsologtostderr: (4.4115422s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.24s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (13.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.121095s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-420000
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image load --daemon gcr.io/google-containers/addon-resizer:functional-420000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image load --daemon gcr.io/google-containers/addon-resizer:functional-420000 --alsologtostderr: (8.4998182s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (13.87s)

TestFunctional/parallel/ServiceCmd/URL (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-420000 service hello-node --url: exit status 1 (15.0344064s)

-- stdout --
	http://127.0.0.1:49460

-- /stdout --
** stderr ** 
	W1011 18:09:15.571070    7644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:49460
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.03s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image save gcr.io/google-containers/addon-resizer:functional-420000 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image save gcr.io/google-containers/addon-resizer:functional-420000 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr: (4.579381s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.58s)

TestFunctional/parallel/DockerEnv/powershell (10.37s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-420000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-420000"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-420000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-420000": (6.214212s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-420000 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-420000 docker-env | Invoke-Expression ; docker images": (4.14574s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (10.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (2.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image rm gcr.io/google-containers/addon-resizer:functional-420000 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image rm gcr.io/google-containers/addon-resizer:functional-420000 --alsologtostderr: (1.1607043s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image ls
E1011 18:09:33.647054    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.10s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (9.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr: (8.0946572s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image ls: (1.5081605s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (9.60s)

TestFunctional/parallel/UpdateContextCmd/no_changes (1.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 update-context --alsologtostderr -v=2: (1.0920109s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (1.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.92s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.92s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.96s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.96s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-420000
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-420000 image save --daemon gcr.io/google-containers/addon-resizer:functional-420000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-420000 image save --daemon gcr.io/google-containers/addon-resizer:functional-420000 --alsologtostderr: (8.8665702s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-420000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.42s)

TestFunctional/delete_addon-resizer_images (1.58s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-420000
functional_test.go:189: (dbg) Done: docker rmi -f gcr.io/google-containers/addon-resizer:functional-420000: (1.2838093s)
--- PASS: TestFunctional/delete_addon-resizer_images (1.58s)

TestFunctional/delete_my-image_image (0.2s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-420000
--- PASS: TestFunctional/delete_my-image_image (0.20s)

TestFunctional/delete_minikube_cached_images (0.19s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-420000
--- PASS: TestFunctional/delete_minikube_cached_images (0.19s)

TestImageBuild/serial/Setup (74.32s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-038800 --driver=docker
E1011 18:14:33.641202    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-038800 --driver=docker: (1m14.3186234s)
--- PASS: TestImageBuild/serial/Setup (74.32s)

TestImageBuild/serial/NormalBuild (4.21s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-038800
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-038800: (4.2136564s)
--- PASS: TestImageBuild/serial/NormalBuild (4.21s)

TestImageBuild/serial/BuildWithBuildArg (2.58s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-038800
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-038800: (2.5805844s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.58s)

TestImageBuild/serial/BuildWithDockerIgnore (3.58s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-038800
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-038800: (3.5756187s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (3.58s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.48s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-038800
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-038800: (2.4812517s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.48s)

TestIngressAddonLegacy/StartLegacyK8sCluster (131.45s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-860400 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-860400 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (2m11.4533257s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (131.45s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (51.55s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-860400 addons enable ingress --alsologtostderr -v=5
E1011 18:18:02.711968    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:02.726294    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:02.741758    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:02.765176    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:02.818562    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:02.913325    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:03.083505    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:03.409083    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:04.063585    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:05.353455    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:07.922276    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:13.693464    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:18:23.946755    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-860400 addons enable ingress --alsologtostderr -v=5: (51.5530043s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (51.55s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.94s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-860400 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-860400 addons enable ingress-dns --alsologtostderr -v=5: (1.9393832s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.94s)

TestJSONOutput/start/Command (87.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-685500 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E1011 18:19:33.643674    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:20:47.337232    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:20:56.868575    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-685500 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m27.6280171s)
--- PASS: TestJSONOutput/start/Command (87.63s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-685500 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-685500 --output=json --user=testUser: (1.677836s)
--- PASS: TestJSONOutput/pause/Command (1.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (1.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-685500 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-685500 --output=json --user=testUser: (1.537158s)
--- PASS: TestJSONOutput/unpause/Command (1.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-685500 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-685500 --output=json --user=testUser: (13.0149472s)
--- PASS: TestJSONOutput/stop/Command (13.02s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.37s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-828700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-828700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (264.1541ms)

-- stdout --
	{"specversion":"1.0","id":"cdcd8d97-4f6f-4c3b-9049-c49c50d73729","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-828700] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"40be3700-41f4-400e-8c4f-73c70034dc5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"b92cbf58-f06f-4e56-ab5e-7dd0f8e5ea12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"76e21fd6-a34a-4af6-96ab-70acd758ee1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"1c73f22d-5416-44b0-b159-263730047db7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17402"}}
	{"specversion":"1.0","id":"f1f87d49-bd29-467e-80a0-a377e0b4ac26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"023418d5-a734-445a-8fa0-87872d597342","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W1011 18:21:17.939938    8688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-828700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-828700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-828700: (1.1097913s)
--- PASS: TestErrorJSONOutput (1.37s)

TestKicCustomNetwork/create_custom_network (81.85s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-573900 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-573900 --network=: (1m16.8866429s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-573900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-573900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-573900: (4.7736283s)
--- PASS: TestKicCustomNetwork/create_custom_network (81.85s)

TestKicCustomNetwork/use_default_bridge_network (78.74s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-372900 --network=bridge
E1011 18:23:02.705683    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:23:27.148917    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:27.164102    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:27.179403    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:27.211213    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:27.258415    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:27.351712    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:27.523815    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:27.849345    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:28.503087    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:29.796473    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:31.193139    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:23:32.364047    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:37.488821    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:23:47.731045    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-372900 --network=bridge: (1m13.8124293s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-372900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-372900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-372900: (4.7423161s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (78.74s)

TestKicExistingNetwork (82.53s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-557700 --network=existing-network
E1011 18:24:08.225203    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:24:33.652987    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:24:49.192030    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-557700 --network=existing-network: (1m16.6525884s)
helpers_test.go:175: Cleaning up "existing-network-557700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-557700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-557700: (4.6771497s)
--- PASS: TestKicExistingNetwork (82.53s)

TestKicCustomSubnet (83.87s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-253100 --subnet=192.168.60.0/24
E1011 18:26:11.122506    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-253100 --subnet=192.168.60.0/24: (1m18.9634293s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-253100 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-253100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-253100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-253100: (4.7228594s)
--- PASS: TestKicCustomSubnet (83.87s)

TestKicStaticIP (81.73s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-398000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-398000 --static-ip=192.168.200.200: (1m15.7927598s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-398000 ip
helpers_test.go:175: Cleaning up "static-ip-398000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-398000
E1011 18:28:02.714274    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-398000: (5.2815406s)
--- PASS: TestKicStaticIP (81.73s)

TestMainNoArgs (0.22s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.22s)

TestMinikubeProfile (156.98s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-866600 --driver=docker
E1011 18:28:27.139222    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:28:54.966725    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-866600 --driver=docker: (1m14.8360601s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-866600 --driver=docker
E1011 18:29:33.646791    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-866600 --driver=docker: (1m6.1053462s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-866600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.1728142s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-866600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.1943216s)
helpers_test.go:175: Cleaning up "second-866600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-866600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-866600: (5.0223185s)
helpers_test.go:175: Cleaning up "first-866600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-866600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-866600: (5.798216s)
--- PASS: TestMinikubeProfile (156.98s)

TestMountStart/serial/StartWithMountFirst (21.8s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-235500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-235500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (20.7922373s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.80s)

TestMountStart/serial/VerifyMountFirst (1.09s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-235500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-235500 ssh -- ls /minikube-host: (1.0869916s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.09s)

TestMountStart/serial/StartWithMountSecond (19.5s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-235500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-235500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (18.4924062s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.50s)

TestMountStart/serial/VerifyMountSecond (1.14s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-235500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-235500 ssh -- ls /minikube-host: (1.1437924s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.14s)

TestMountStart/serial/DeleteFirst (4.23s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-235500 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-235500 --alsologtostderr -v=5: (4.2267388s)
--- PASS: TestMountStart/serial/DeleteFirst (4.23s)

TestMountStart/serial/VerifyMountPostDelete (1.12s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-235500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-235500 ssh -- ls /minikube-host: (1.1216886s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.12s)

TestMountStart/serial/Stop (2.61s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-235500
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-235500: (2.6134621s)
--- PASS: TestMountStart/serial/Stop (2.61s)

TestMountStart/serial/RestartStopped (13.57s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-235500
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-235500: (12.560895s)
--- PASS: TestMountStart/serial/RestartStopped (13.57s)

TestMountStart/serial/VerifyMountPostStop (1.14s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-235500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-235500 ssh -- ls /minikube-host: (1.1350815s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.14s)

TestMultiNode/serial/FreshStart2Nodes (160.37s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-080800 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E1011 18:33:02.718869    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:33:27.141587    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:34:26.566348    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:34:33.651245    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
multinode_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-080800 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m37.5728025s)
multinode_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr
multinode_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr: (2.7935062s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (160.37s)

TestMultiNode/serial/DeployApp2Nodes (25.63s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- rollout status deployment/busybox: (17.9313013s)
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-rvjwx -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-rvjwx -- nslookup kubernetes.io: (2.001235s)
multinode_test.go:524: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-v42zc -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-v42zc -- nslookup kubernetes.io: (1.6439202s)
multinode_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-rvjwx -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-v42zc -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-rvjwx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-v42zc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (25.63s)

TestMultiNode/serial/PingHostFrom2Pods (2.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-rvjwx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-rvjwx -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-v42zc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-080800 -- exec busybox-5bc68d56bd-v42zc -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.76s)

TestMultiNode/serial/AddNode (58.3s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-080800 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-080800 -v 3 --alsologtostderr: (55.3277545s)
multinode_test.go:116: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr
multinode_test.go:116: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr: (2.9728824s)
--- PASS: TestMultiNode/serial/AddNode (58.30s)

TestMultiNode/serial/ProfileList (1.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.2414503s)
--- PASS: TestMultiNode/serial/ProfileList (1.24s)

TestMultiNode/serial/CopyFile (41s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 status --output json --alsologtostderr: (2.8999465s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp testdata\cp-test.txt multinode-080800:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp testdata\cp-test.txt multinode-080800:/home/docker/cp-test.txt: (1.1546619s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test.txt": (1.1583155s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1308356581\001\cp-test_multinode-080800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1308356581\001\cp-test_multinode-080800.txt: (1.1202846s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test.txt": (1.1917471s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800:/home/docker/cp-test.txt multinode-080800-m02:/home/docker/cp-test_multinode-080800_multinode-080800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800:/home/docker/cp-test.txt multinode-080800-m02:/home/docker/cp-test_multinode-080800_multinode-080800-m02.txt: (1.7105835s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test.txt": (1.1846522s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test_multinode-080800_multinode-080800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test_multinode-080800_multinode-080800-m02.txt": (1.1688511s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800:/home/docker/cp-test.txt multinode-080800-m03:/home/docker/cp-test_multinode-080800_multinode-080800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800:/home/docker/cp-test.txt multinode-080800-m03:/home/docker/cp-test_multinode-080800_multinode-080800-m03.txt: (1.7181695s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test.txt": (1.1523022s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test_multinode-080800_multinode-080800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test_multinode-080800_multinode-080800-m03.txt": (1.1582077s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp testdata\cp-test.txt multinode-080800-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp testdata\cp-test.txt multinode-080800-m02:/home/docker/cp-test.txt: (1.1683925s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test.txt": (1.1372725s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1308356581\001\cp-test_multinode-080800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1308356581\001\cp-test_multinode-080800-m02.txt: (1.1225841s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test.txt": (1.1397392s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m02:/home/docker/cp-test.txt multinode-080800:/home/docker/cp-test_multinode-080800-m02_multinode-080800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m02:/home/docker/cp-test.txt multinode-080800:/home/docker/cp-test_multinode-080800-m02_multinode-080800.txt: (1.7271162s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test.txt": (1.1794565s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test_multinode-080800-m02_multinode-080800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test_multinode-080800-m02_multinode-080800.txt": (1.1283424s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m02:/home/docker/cp-test.txt multinode-080800-m03:/home/docker/cp-test_multinode-080800-m02_multinode-080800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m02:/home/docker/cp-test.txt multinode-080800-m03:/home/docker/cp-test_multinode-080800-m02_multinode-080800-m03.txt: (1.7203389s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test.txt": (1.1611662s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test_multinode-080800-m02_multinode-080800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test_multinode-080800-m02_multinode-080800-m03.txt": (1.1376193s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp testdata\cp-test.txt multinode-080800-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp testdata\cp-test.txt multinode-080800-m03:/home/docker/cp-test.txt: (1.1587342s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test.txt": (1.1202142s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1308356581\001\cp-test_multinode-080800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1308356581\001\cp-test_multinode-080800-m03.txt: (1.163073s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test.txt": (1.214775s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m03:/home/docker/cp-test.txt multinode-080800:/home/docker/cp-test_multinode-080800-m03_multinode-080800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m03:/home/docker/cp-test.txt multinode-080800:/home/docker/cp-test_multinode-080800-m03_multinode-080800.txt: (1.791038s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test.txt": (1.1257512s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test_multinode-080800-m03_multinode-080800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800 "sudo cat /home/docker/cp-test_multinode-080800-m03_multinode-080800.txt": (1.204128s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m03:/home/docker/cp-test.txt multinode-080800-m02:/home/docker/cp-test_multinode-080800-m03_multinode-080800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 cp multinode-080800-m03:/home/docker/cp-test.txt multinode-080800-m02:/home/docker/cp-test_multinode-080800-m03_multinode-080800-m02.txt: (1.6800349s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m03 "sudo cat /home/docker/cp-test.txt": (1.1401953s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test_multinode-080800-m03_multinode-080800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 ssh -n multinode-080800-m02 "sudo cat /home/docker/cp-test_multinode-080800-m03_multinode-080800-m02.txt": (1.1537218s)
--- PASS: TestMultiNode/serial/CopyFile (41.00s)
TestMultiNode/serial/StopNode (6.81s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 node stop m03: (2.2373046s)
multinode_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-080800 status: exit status 7 (2.333172s)
-- stdout --
	multinode-080800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-080800-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-080800-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W1011 18:36:48.389650    8116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
multinode_test.go:223: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr: exit status 7 (2.2347665s)
-- stdout --
	multinode-080800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-080800-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-080800-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W1011 18:36:50.713211    5492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1011 18:36:50.781978    5492 out.go:296] Setting OutFile to fd 840 ...
	I1011 18:36:50.782909    5492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 18:36:50.782909    5492 out.go:309] Setting ErrFile to fd 972...
	I1011 18:36:50.782909    5492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 18:36:50.792900    5492 out.go:303] Setting JSON to false
	I1011 18:36:50.792900    5492 mustload.go:65] Loading cluster: multinode-080800
	I1011 18:36:50.792900    5492 notify.go:220] Checking for updates...
	I1011 18:36:50.794656    5492 config.go:182] Loaded profile config "multinode-080800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 18:36:50.794656    5492 status.go:255] checking status of multinode-080800 ...
	I1011 18:36:50.809496    5492 cli_runner.go:164] Run: docker container inspect multinode-080800 --format={{.State.Status}}
	I1011 18:36:50.966929    5492 status.go:330] multinode-080800 host status = "Running" (err=<nil>)
	I1011 18:36:50.966929    5492 host.go:66] Checking if "multinode-080800" exists ...
	I1011 18:36:50.974773    5492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-080800
	I1011 18:36:51.156968    5492 host.go:66] Checking if "multinode-080800" exists ...
	I1011 18:36:51.173053    5492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 18:36:51.181128    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-080800
	I1011 18:36:51.358722    5492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50564 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-080800\id_rsa Username:docker}
	I1011 18:36:51.511173    5492 ssh_runner.go:195] Run: systemctl --version
	I1011 18:36:51.536200    5492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 18:36:51.572354    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-080800
	I1011 18:36:51.749967    5492 kubeconfig.go:92] found "multinode-080800" server: "https://127.0.0.1:50568"
	I1011 18:36:51.749967    5492 api_server.go:166] Checking apiserver status ...
	I1011 18:36:51.762883    5492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 18:36:51.802914    5492 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2395/cgroup
	I1011 18:36:51.829406    5492 api_server.go:182] apiserver freezer: "7:freezer:/docker/cec2754799acaeb0048542b3a0fa56e5a7e191cb90f99ef58ba43f766f7a98f6/kubepods/burstable/podcfae1ec82264cd4b4f293c40e2a983f5/8da6e9611f80c7e2c230a9133007705b13b97b87be0ed8c1195781847b4ab96e"
	I1011 18:36:51.837452    5492 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cec2754799acaeb0048542b3a0fa56e5a7e191cb90f99ef58ba43f766f7a98f6/kubepods/burstable/podcfae1ec82264cd4b4f293c40e2a983f5/8da6e9611f80c7e2c230a9133007705b13b97b87be0ed8c1195781847b4ab96e/freezer.state
	I1011 18:36:51.857049    5492 api_server.go:204] freezer state: "THAWED"
	I1011 18:36:51.859141    5492 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50568/healthz ...
	I1011 18:36:51.873097    5492 api_server.go:279] https://127.0.0.1:50568/healthz returned 200:
	ok
	I1011 18:36:51.873097    5492 status.go:421] multinode-080800 apiserver status = Running (err=<nil>)
	I1011 18:36:51.873097    5492 status.go:257] multinode-080800 status: &{Name:multinode-080800 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 18:36:51.873097    5492 status.go:255] checking status of multinode-080800-m02 ...
	I1011 18:36:51.885638    5492 cli_runner.go:164] Run: docker container inspect multinode-080800-m02 --format={{.State.Status}}
	I1011 18:36:52.061933    5492 status.go:330] multinode-080800-m02 host status = "Running" (err=<nil>)
	I1011 18:36:52.061933    5492 host.go:66] Checking if "multinode-080800-m02" exists ...
	I1011 18:36:52.070240    5492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-080800-m02
	I1011 18:36:52.264415    5492 host.go:66] Checking if "multinode-080800-m02" exists ...
	I1011 18:36:52.274738    5492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 18:36:52.280642    5492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-080800-m02
	I1011 18:36:52.453413    5492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50641 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-080800-m02\id_rsa Username:docker}
	I1011 18:36:52.588585    5492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 18:36:52.615266    5492 status.go:257] multinode-080800-m02 status: &{Name:multinode-080800-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1011 18:36:52.615404    5492 status.go:255] checking status of multinode-080800-m03 ...
	I1011 18:36:52.627878    5492 cli_runner.go:164] Run: docker container inspect multinode-080800-m03 --format={{.State.Status}}
	I1011 18:36:52.811007    5492 status.go:330] multinode-080800-m03 host status = "Stopped" (err=<nil>)
	I1011 18:36:52.811007    5492 status.go:343] host is not running, skipping remaining checks
	I1011 18:36:52.811007    5492 status.go:257] multinode-080800-m03 status: &{Name:multinode-080800-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (6.81s)
TestMultiNode/serial/StartAfterStop (24.97s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 node start m03 --alsologtostderr: (21.7363397s)
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 status
multinode_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 status: (2.8174568s)
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.97s)
TestMultiNode/serial/RestartKeepsNodes (156.91s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-080800
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-080800
E1011 18:37:36.885706    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-080800: (26.3610707s)
multinode_test.go:295: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-080800 --wait=true -v=8 --alsologtostderr
E1011 18:38:02.714428    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:38:27.146734    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 18:39:33.655470    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 18:39:50.341882    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
multinode_test.go:295: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-080800 --wait=true -v=8 --alsologtostderr: (2m10.0731097s)
multinode_test.go:300: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-080800
--- PASS: TestMultiNode/serial/RestartKeepsNodes (156.91s)
TestMultiNode/serial/DeleteNode (14.49s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 node delete m03: (8.8678017s)
multinode_test.go:400: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr
multinode_test.go:400: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr: (5.0405011s)
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (14.49s)
TestMultiNode/serial/StopMultiNode (25.58s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 stop
multinode_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 stop: (24.4147764s)
multinode_test.go:320: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-080800 status: exit status 7 (577.9692ms)
-- stdout --
	multinode-080800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-080800-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W1011 18:40:33.727612    5532 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
multinode_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr: exit status 7 (590.4661ms)
-- stdout --
	multinode-080800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-080800-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W1011 18:40:34.314210    3992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I1011 18:40:34.380412    3992 out.go:296] Setting OutFile to fd 588 ...
	I1011 18:40:34.381143    3992 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 18:40:34.381143    3992 out.go:309] Setting ErrFile to fd 936...
	I1011 18:40:34.381143    3992 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1011 18:40:34.393997    3992 out.go:303] Setting JSON to false
	I1011 18:40:34.393997    3992 mustload.go:65] Loading cluster: multinode-080800
	I1011 18:40:34.393997    3992 notify.go:220] Checking for updates...
	I1011 18:40:34.394944    3992 config.go:182] Loaded profile config "multinode-080800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1011 18:40:34.394944    3992 status.go:255] checking status of multinode-080800 ...
	I1011 18:40:34.408884    3992 cli_runner.go:164] Run: docker container inspect multinode-080800 --format={{.State.Status}}
	I1011 18:40:34.590446    3992 status.go:330] multinode-080800 host status = "Stopped" (err=<nil>)
	I1011 18:40:34.590446    3992 status.go:343] host is not running, skipping remaining checks
	I1011 18:40:34.590547    3992 status.go:257] multinode-080800 status: &{Name:multinode-080800 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 18:40:34.590547    3992 status.go:255] checking status of multinode-080800-m02 ...
	I1011 18:40:34.602254    3992 cli_runner.go:164] Run: docker container inspect multinode-080800-m02 --format={{.State.Status}}
	I1011 18:40:34.761515    3992 status.go:330] multinode-080800-m02 host status = "Stopped" (err=<nil>)
	I1011 18:40:34.761692    3992 status.go:343] host is not running, skipping remaining checks
	I1011 18:40:34.761692    3992 status.go:257] multinode-080800-m02 status: &{Name:multinode-080800-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.58s)
TestMultiNode/serial/RestartMultiNode (103.64s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-080800 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-080800 --wait=true -v=8 --alsologtostderr --driver=docker: (1m40.9019566s)
multinode_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr
multinode_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-080800 status --alsologtostderr: (2.0971195s)
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (103.64s)
TestMultiNode/serial/ValidateNameConflict (76.27s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-080800
multinode_test.go:452: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-080800-m02 --driver=docker
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-080800-m02 --driver=docker: exit status 14 (283.6285ms)
-- stdout --
	* [multinode-080800-m02] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	W1011 18:42:18.779968    3924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Profile name 'multinode-080800-m02' is duplicated with machine name 'multinode-080800-m02' in profile 'multinode-080800'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-080800-m03 --driver=docker
E1011 18:43:02.719237    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:43:27.141470    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
multinode_test.go:460: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-080800-m03 --driver=docker: (1m9.2475775s)
multinode_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-080800
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-080800: exit status 80 (1.5278132s)
-- stdout --
	* Adding node m03 to cluster multinode-080800
	
	
-- /stdout --
** stderr ** 
	W1011 18:43:28.302082    7320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-080800-m03 already exists in multinode-080800-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_29.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-080800-m03
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-080800-m03: (4.9862501s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (76.27s)
TestPreload (216.05s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-270000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E1011 18:44:33.654849    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-270000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (2m9.2623176s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-270000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-270000 image pull gcr.io/k8s-minikube/busybox: (2.2271648s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-270000
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-270000: (12.62202s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-270000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-270000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (1m5.4345176s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-270000 image list
helpers_test.go:175: Cleaning up "test-preload-270000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-270000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-270000: (5.6565748s)
--- PASS: TestPreload (216.05s)

TestScheduledStopWindows (147.05s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-992100 --memory=2048 --driver=docker
E1011 18:48:02.711811    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 18:48:27.149422    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-992100 --memory=2048 --driver=docker: (1m16.6263531s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-992100 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-992100 --schedule 5m: (1.3947671s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-992100 -n scheduled-stop-992100
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-992100 -n scheduled-stop-992100: (1.3562216s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-992100 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-992100 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.2429777s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-992100 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-992100 --schedule 5s: (1.4070253s)
E1011 18:49:33.655058    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-992100
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-992100: exit status 7 (396.1285ms)

-- stdout --
	scheduled-stop-992100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W1011 18:49:45.476996    1408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-992100 -n scheduled-stop-992100
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-992100 -n scheduled-stop-992100: exit status 7 (413.1631ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1011 18:49:45.858653    3776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-992100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-992100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-992100: (4.2035307s)
--- PASS: TestScheduledStopWindows (147.05s)

TestInsufficientStorage (54.46s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-307400 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-307400 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (47.4426576s)

-- stdout --
	{"specversion":"1.0","id":"548013a6-105c-437e-a882-3e3224b8995f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-307400] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2bfc3a9c-ff88-4a2e-b69d-7d26bee82145","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"f058007b-9851-43fa-b760-3aaba2d30519","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"528dcf21-0dd8-403d-a573-bd3b1394fd6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"3ced3964-5536-49de-a965-b3696956ece6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17402"}}
	{"specversion":"1.0","id":"60a3e51b-ca02-4317-879d-96b63a861992","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"19a21d20-8468-4a4e-a960-3b70c13f3743","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"886f3e0b-178b-4637-8246-a2066071d969","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d284d9a0-10b2-4c54-9702-167b18dcfe0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"13126399-c963-4c5e-99fd-54fa7dbcc884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"877391da-7185-47d4-a4f5-b4816b0cf023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-307400 in cluster insufficient-storage-307400","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7821810-1f85-4769-b23d-f018ffe45306","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f20100c7-e616-47d8-a4e7-871a63cb6230","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ac4185b-59b2-4789-a22b-d39b96443216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
** stderr ** 
	W1011 18:49:50.480793    2948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-307400 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-307400 --output=json --layout=cluster: exit status 7 (1.2812956s)

-- stdout --
	{"Name":"insufficient-storage-307400","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-307400","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W1011 18:50:37.918190    7224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1011 18:50:39.010208    7224 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-307400" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-307400 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-307400 --output=json --layout=cluster: exit status 7 (1.2580488s)

-- stdout --
	{"Name":"insufficient-storage-307400","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-307400","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W1011 18:50:39.195753    8724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E1011 18:50:40.291654    8724 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-307400" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	E1011 18:50:40.327175    8724 status.go:559] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\insufficient-storage-307400\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-307400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-307400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-307400: (4.4701528s)
--- PASS: TestInsufficientStorage (54.46s)

TestRunningBinaryUpgrade (296.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3567370981.exe start -p running-upgrade-051900 --memory=2200 --vm-driver=docker
version_upgrade_test.go:133: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.3567370981.exe start -p running-upgrade-051900 --memory=2200 --vm-driver=docker: (2m57.5299789s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-051900 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-051900 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m46.6505728s)
helpers_test.go:175: Cleaning up "running-upgrade-051900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-051900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-051900: (11.6409728s)
--- PASS: TestRunningBinaryUpgrade (296.63s)

TestKubernetesUpgrade (395.83s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-378300 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-378300 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (2m44.3171858s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-378300
E1011 18:56:30.352640    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-378300: (13.1741453s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-378300 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-378300 status --format={{.Host}}: exit status 7 (503.1372ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1011 18:56:32.864666    2768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-378300 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-378300 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker: (1m50.5365977s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-378300 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-378300 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-378300 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (435.5439ms)

-- stdout --
	* [kubernetes-upgrade-378300] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W1011 18:58:24.204540    6212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-378300
	    minikube start -p kubernetes-upgrade-378300 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3783002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-378300 --kubernetes-version=v1.28.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-378300 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker
E1011 18:58:27.158400    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-378300 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker: (1m27.8085838s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-378300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-378300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-378300: (18.799944s)
--- PASS: TestKubernetesUpgrade (395.83s)

TestMissingContainerUpgrade (373.28s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.4107863879.exe start -p missing-upgrade-979800 --memory=2200 --driver=docker
version_upgrade_test.go:322: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.4107863879.exe start -p missing-upgrade-979800 --memory=2200 --driver=docker: (3m41.5880667s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-979800
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-979800: (11.2357259s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-979800
version_upgrade_test.go:342: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-979800 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:342: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-979800 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m2.4055006s)
helpers_test.go:175: Cleaning up "missing-upgrade-979800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-979800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-979800: (16.6569281s)
--- PASS: TestMissingContainerUpgrade (373.28s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.45s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-044100 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-044100 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (447.7151ms)

-- stdout --
	* [NoKubernetes-044100] minikube v1.31.2 on Microsoft Windows 10 Enterprise N 10.0.19045.3570 Build 19045.3570
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W1011 18:50:44.997297    8724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.45s)

TestNoKubernetes/serial/StartWithK8s (125.09s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-044100 --driver=docker
E1011 18:51:06.573365    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-044100 --driver=docker: (2m3.3479349s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-044100 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-044100 status -o json: (1.7376928s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (125.09s)

TestNoKubernetes/serial/StartWithStopK8s (33.53s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-044100 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-044100 --no-kubernetes --driver=docker: (19.1702211s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-044100 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-044100 status -o json: exit status 2 (1.4172539s)

-- stdout --
	{"Name":"NoKubernetes-044100","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
** stderr ** 
	W1011 18:53:09.646104    2928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-044100
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-044100: (12.9434178s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.53s)

TestNoKubernetes/serial/Start (49.48s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-044100 --no-kubernetes --driver=docker
E1011 18:53:27.151764    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-044100 --no-kubernetes --driver=docker: (49.4792072s)
--- PASS: TestNoKubernetes/serial/Start (49.48s)

TestNoKubernetes/serial/VerifyK8sNotRunning (1.49s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-044100 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-044100 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.4853003s)

** stderr ** 
	W1011 18:54:13.503401    4356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.49s)
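The Docker CLI context warning above recurs throughout this run, always pointing at the same opaque directory under `.docker\contexts\meta`. That directory name is not random: the Docker CLI keys each context's `meta.json` by a digest of the context name, and the hex string in the warning is the SHA-256 of `default`. A quick sanity check (assuming a POSIX shell with `sha256sum` available):

```shell
# Reproduce the directory name from the warning: the Docker CLI stores
# context metadata under ~/.docker/contexts/meta/<digest>/meta.json,
# where <digest> is the SHA-256 of the context name.
printf '%s' default | sha256sum
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f  -
```

This matches the path in every `Unable to resolve the current Docker CLI context "default"` warning in this report, which is why the warning is identical across otherwise unrelated tests: the shared `default` context's metadata file is missing on this Windows builder.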

TestNoKubernetes/serial/ProfileList (7.94s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
E1011 18:54:16.896619    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (3.9107765s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (4.0322717s)
--- PASS: TestNoKubernetes/serial/ProfileList (7.94s)

TestNoKubernetes/serial/Stop (8.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-044100
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-044100: (8.2433797s)
--- PASS: TestNoKubernetes/serial/Stop (8.24s)

TestNoKubernetes/serial/StartNoArgs (22.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-044100 --driver=docker
E1011 18:54:33.649189    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-044100 --driver=docker: (22.4662396s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.47s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.65s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-044100 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-044100 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.6497843s)
** stderr ** 
	W1011 18:54:53.673253    9576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.65s)

TestStoppedBinaryUpgrade/Setup (0.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.63s)

TestStoppedBinaryUpgrade/Upgrade (219.82s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.1402577031.exe start -p stopped-upgrade-318200 --memory=2200 --vm-driver=docker
version_upgrade_test.go:196: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.1402577031.exe start -p stopped-upgrade-318200 --memory=2200 --vm-driver=docker: (1m57.9649178s)
version_upgrade_test.go:205: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.1402577031.exe -p stopped-upgrade-318200 stop
version_upgrade_test.go:205: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.1402577031.exe -p stopped-upgrade-318200 stop: (8.0824594s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-318200 --memory=2200 --alsologtostderr -v=1 --driver=docker
E1011 18:58:02.716640    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
version_upgrade_test.go:211: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-318200 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m33.7700554s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (219.82s)

TestStoppedBinaryUpgrade/MinikubeLogs (4.35s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-318200
version_upgrade_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-318200: (4.3531307s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (4.35s)

TestPause/serial/Start (130.82s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-375900 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-375900 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m10.8212358s)
--- PASS: TestPause/serial/Start (130.82s)

TestStartStop/group/old-k8s-version/serial/FirstStart (197.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-796400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-796400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (3m17.9499868s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (197.95s)

TestStartStop/group/no-preload/serial/FirstStart (173.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-517500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-517500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.28.2: (2m53.1405243s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (173.14s)

TestStartStop/group/embed-certs/serial/FirstStart (122.18s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-164000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-164000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.2: (2m2.1840655s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (122.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-822300 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.2
E1011 19:03:27.155334    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 19:04:33.651086    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-822300 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.2: (1m56.3859899s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.39s)

TestStartStop/group/no-preload/serial/DeployApp (12.06s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-517500 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b973fab1-2950-4b9f-a55c-99f6fbe169c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b973fab1-2950-4b9f-a55c-99f6fbe169c3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.0544498s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-517500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.06s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-517500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-517500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.8220344s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-517500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.18s)

TestStartStop/group/no-preload/serial/Stop (12.96s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-517500 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-517500 --alsologtostderr -v=3: (12.9568625s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.96s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-796400 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [93c0f597-74d7-4c56-bb8d-751286ec4ea8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [93c0f597-74d7-4c56-bb8d-751286ec4ea8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0569385s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-796400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-517500 -n no-preload-517500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-517500 -n no-preload-517500: exit status 7 (474.0255ms)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W1011 19:05:02.589329   10476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-517500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.21s)

TestStartStop/group/no-preload/serial/SecondStart (368.81s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-517500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-517500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.28.2: (6m6.8827447s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-517500 -n no-preload-517500
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-517500 -n no-preload-517500: (1.9310481s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (368.81s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-796400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-796400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.5274378s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-796400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.84s)

TestStartStop/group/old-k8s-version/serial/Stop (13.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-796400 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-796400 --alsologtostderr -v=3: (13.1879705s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.19s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-822300 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eb6a50dd-208d-4c56-8d1a-8fd7e224d32b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eb6a50dd-208d-4c56-8d1a-8fd7e224d32b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.0815018s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-822300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.36s)

TestStartStop/group/embed-certs/serial/DeployApp (12.15s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-164000 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8dd7a98d-e993-418f-8308-d73007588be6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8dd7a98d-e993-418f-8308-d73007588be6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0445059s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-164000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-796400 -n old-k8s-version-796400
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-796400 -n old-k8s-version-796400: exit status 7 (502.6814ms)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W1011 19:05:25.963357   10032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-796400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-822300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-822300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.8530034s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-822300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.14s)

TestStartStop/group/old-k8s-version/serial/SecondStart (472.40s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-796400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-796400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m49.7193407s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-796400 -n old-k8s-version-796400
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-796400 -n old-k8s-version-796400: (2.6776145s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (472.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-164000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-164000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.7615079s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-164000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-822300 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-822300 --alsologtostderr -v=3: (13.2452613s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.25s)

TestStartStop/group/embed-certs/serial/Stop (13.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-164000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-164000 --alsologtostderr -v=3: (13.4163665s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300: exit status 7 (484.9883ms)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W1011 19:05:42.510725    6808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-822300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.26s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-164000 -n embed-certs-164000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-164000 -n embed-certs-164000: exit status 7 (489.9955ms)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W1011 19:05:43.723462    5388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-164000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (385.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-822300 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-822300 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.2: (6m23.4170553s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300: (1.7680275s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (385.19s)

TestStartStop/group/embed-certs/serial/SecondStart (371.06s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-164000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.2
E1011 19:07:46.589631    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 19:08:02.720557    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 19:08:27.155883    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 19:09:33.660172    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 19:10:56.907345    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-164000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.2: (6m8.8922662s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-164000 -n embed-certs-164000
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-164000 -n embed-certs-164000: (2.167395s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (371.06s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (60.13s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-65mbd" [386d2c44-c987-4fad-9ddc-0f2252c2f5e0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-65mbd" [386d2c44-c987-4fad-9ddc-0f2252c2f5e0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 1m0.1231395s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (60.13s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (39.15s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-442vn" [2ab41a29-e91a-4766-9c53-e962721e1d15] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-442vn" [2ab41a29-e91a-4766-9c53-e962721e1d15] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 39.1406016s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (39.15s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (52.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4xfks" [dbf39654-d0cf-426b-a88c-b858bc111a69] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4xfks" [dbf39654-d0cf-426b-a88c-b858bc111a69] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 52.1315913s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (52.14s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.49s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-65mbd" [386d2c44-c987-4fad-9ddc-0f2252c2f5e0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0409839s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-517500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.49s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.79s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-517500 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-517500 "sudo crictl images -o json": (1.790883s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.79s)

TestStartStop/group/no-preload/serial/Pause (14.02s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-517500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-517500 --alsologtostderr -v=1: (2.2882225s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-517500 -n no-preload-517500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-517500 -n no-preload-517500: exit status 2 (1.6926826s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W1011 19:12:22.314741    9864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-517500 -n no-preload-517500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-517500 -n no-preload-517500: exit status 2 (2.0827028s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1011 19:12:24.015406   10720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-517500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-517500 --alsologtostderr -v=1: (3.0580922s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-517500 -n no-preload-517500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-517500 -n no-preload-517500: (2.7247603s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-517500 -n no-preload-517500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-517500 -n no-preload-517500: (2.1691632s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (14.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (22.85s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-442vn" [2ab41a29-e91a-4766-9c53-e962721e1d15] Running
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: etcdserver: request timed out
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.1010084s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-164000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (22.85s)

TestStartStop/group/newest-cni/serial/FirstStart (169.17s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-452600 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-452600 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.28.2: (2m49.170797s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (169.17s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.64s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-164000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-164000 "sudo crictl images -o json": (2.6414197s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.64s)

TestStartStop/group/embed-certs/serial/Pause (18.13s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-164000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-164000 --alsologtostderr -v=1: (3.7566419s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-164000 -n embed-certs-164000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-164000 -n embed-certs-164000: exit status 2 (1.898065s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W1011 19:13:04.478405   10060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-164000 -n embed-certs-164000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-164000 -n embed-certs-164000: exit status 2 (1.9199005s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1011 19:13:06.388185    2376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-164000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-164000 --alsologtostderr -v=1: (2.3278548s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-164000 -n embed-certs-164000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-164000 -n embed-certs-164000: (5.7918569s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-164000 -n embed-certs-164000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-164000 -n embed-certs-164000: (2.4381672s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (18.13s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.64s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4xfks" [dbf39654-d0cf-426b-a88c-b858bc111a69] Running
E1011 19:13:02.724097    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0982549s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-822300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.64s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (2.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-822300 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-822300 "sudo crictl images -o json": (2.1354294s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (2.14s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (16.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-822300 --alsologtostderr -v=1
E1011 19:13:10.362670    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-822300 --alsologtostderr -v=1: (2.9517101s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300: exit status 2 (1.8045321s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W1011 19:13:11.859911    8580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300: exit status 2 (1.536498s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1011 19:13:13.629148    2204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-822300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-822300 --alsologtostderr -v=1: (3.8206134s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300: (3.0348374s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-822300 -n default-k8s-diff-port-822300: (3.0160504s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (16.17s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (55.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gp7lz" [0526acc3-2808-4f3a-b9ae-7d809bdfa5af] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: etcdserver: request timed out
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: etcdserver: request timed out
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gp7lz" [0526acc3-2808-4f3a-b9ae-7d809bdfa5af] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 55.1632408s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (55.17s)

TestNetworkPlugins/group/auto/Start (140.71s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (2m20.7071154s)
--- PASS: TestNetworkPlugins/group/auto/Start (140.71s)

TestNetworkPlugins/group/kindnet/Start (146.51s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (2m26.5117807s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (146.51s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.05s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gp7lz" [0526acc3-2808-4f3a-b9ae-7d809bdfa5af] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0381153s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-796400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.05s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-796400 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-796400 "sudo crictl images -o json": (1.5511541s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.55s)

TestStartStop/group/old-k8s-version/serial/Pause (13.73s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-796400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-796400 --alsologtostderr -v=1: (2.2600899s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-796400 -n old-k8s-version-796400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-796400 -n old-k8s-version-796400: exit status 2 (1.6196981s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W1011 19:14:24.596882    8144 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-796400 -n old-k8s-version-796400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-796400 -n old-k8s-version-796400: exit status 2 (1.5981915s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W1011 19:14:26.231908   10636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-796400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-796400 --alsologtostderr -v=1: (4.3713613s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-796400 -n old-k8s-version-796400
E1011 19:14:33.663474    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-796400 -n old-k8s-version-796400: (1.9262819s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-796400 -n old-k8s-version-796400
E1011 19:14:34.879559    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
E1011 19:14:34.894984    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
E1011 19:14:34.910087    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
E1011 19:14:34.941714    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
E1011 19:14:34.991368    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
E1011 19:14:35.086650    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
E1011 19:14:35.252002    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
E1011 19:14:35.581568    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-796400 -n old-k8s-version-796400: (1.9513821s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (13.73s)

TestNetworkPlugins/group/calico/Start (256.14s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
E1011 19:15:15.507844    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:15.522274    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:15.537378    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:15.567681    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:15.616324    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:15.710318    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:15.880507    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:15.965083    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
E1011 19:15:16.212535    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:16.861314    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:18.148212    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:20.715775    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:25.842119    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:36.089627    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (4m16.1402553s)
--- PASS: TestNetworkPlugins/group/calico/Start (256.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.91s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-452600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-452600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.9124407s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.91s)

TestStartStop/group/newest-cni/serial/Stop (13.62s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-452600 --alsologtostderr -v=3
E1011 19:15:56.579213    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
E1011 19:15:56.933128    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-452600 --alsologtostderr -v=3: (13.6168557s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.62s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.46s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-452600 -n newest-cni-452600
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-452600 -n newest-cni-452600: exit status 7 (545.8176ms)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W1011 19:16:01.979665     720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-452600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.46s)

TestStartStop/group/newest-cni/serial/SecondStart (67.01s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-452600 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-452600 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.28.2: (1m4.4481291s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-452600 -n newest-cni-452600
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-452600 -n newest-cni-452600: (2.5645649s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (67.01s)

TestNetworkPlugins/group/auto/KubeletFlags (1.55s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-035800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-035800 "pgrep -a kubelet": (1.5465114s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.55s)

TestNetworkPlugins/group/auto/NetCatPod (22.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-035800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-035800 replace --force -f testdata\netcat-deployment.yaml: (1.1425251s)
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-khbcb" [7345f94b-69f8-4974-af54-a89b2ed5b43d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-khbcb" [7345f94b-69f8-4974-af54-a89b2ed5b43d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 21.101739s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (22.30s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.07s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-d884t" [fa933137-4877-442d-bfe9-db6a7aa3f336] Running
E1011 19:16:37.557088    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.066305s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.07s)

TestNetworkPlugins/group/kindnet/KubeletFlags (1.61s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-035800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-035800 "pgrep -a kubelet": (1.6052211s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.61s)

TestNetworkPlugins/group/auto/DNS (0.65s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-035800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.65s)

TestNetworkPlugins/group/kindnet/NetCatPod (25.17s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-035800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-035800 replace --force -f testdata\netcat-deployment.yaml: (1.0554954s)
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mz4mx" [952f5141-3cfb-450f-9d49-6e669ed4570c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mz4mx" [952f5141-3cfb-450f-9d49-6e669ed4570c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 24.0720831s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (25.17s)

TestNetworkPlugins/group/auto/Localhost (0.55s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.55s)

TestNetworkPlugins/group/auto/HairPin (0.64s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.64s)

TestNetworkPlugins/group/kindnet/DNS (0.63s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-035800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.63s)

TestNetworkPlugins/group/kindnet/Localhost (0.65s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.65s)

TestNetworkPlugins/group/kindnet/HairPin (0.58s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.58s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.12s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-452600 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-452600 "sudo crictl images -o json": (2.1150377s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.12s)

TestStartStop/group/newest-cni/serial/Pause (19.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-452600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-452600 --alsologtostderr -v=1: (3.1856353s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-452600 -n newest-cni-452600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-452600 -n newest-cni-452600: exit status 2 (2.5108292s)
-- stdout --
	Paused
-- /stdout --
** stderr ** 
	W1011 19:17:15.838905   10936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-452600 -n newest-cni-452600
E1011 19:17:18.866222    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-452600 -n newest-cni-452600: exit status 2 (2.4698693s)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W1011 19:17:18.354127    7208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-452600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-452600 --alsologtostderr -v=1: (4.5716424s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-452600 -n newest-cni-452600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-452600 -n newest-cni-452600: (3.8306252s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-452600 -n newest-cni-452600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-452600 -n newest-cni-452600: (3.186586s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (19.76s)
E1011 19:22:58.567085    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:23:02.737930    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
E1011 19:23:27.162226    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-860400\client.crt: The system cannot find the path specified.
E1011 19:24:05.527723    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-035800\client.crt: The system cannot find the path specified.
E1011 19:24:19.282750    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
E1011 19:24:19.298106    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
E1011 19:24:19.313186    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
E1011 19:24:19.339741    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
E1011 19:24:19.385982    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
E1011 19:24:19.472413    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
E1011 19:24:19.646955    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
E1011 19:24:19.980110    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
E1011 19:24:20.488897    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:24:20.629252    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
E1011 19:24:21.928355    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.

TestNetworkPlugins/group/custom-flannel/Start (150.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (2m30.1374616s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (150.15s)

TestNetworkPlugins/group/false/Start (122.94s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (2m2.935611s)
--- PASS: TestNetworkPlugins/group/false/Start (122.94s)

TestNetworkPlugins/group/enable-default-cni/Start (117.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m57.7977554s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (117.80s)

TestNetworkPlugins/group/calico/ControllerPod (5.15s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4ltnj" [5adf1789-a0e2-4ebe-9294-5b139f206dfc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.1444955s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.15s)

TestNetworkPlugins/group/calico/KubeletFlags (1.69s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-035800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-035800 "pgrep -a kubelet": (1.6838385s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (1.69s)

TestNetworkPlugins/group/calico/NetCatPod (27.42s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-035800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-035800 replace --force -f testdata\netcat-deployment.yaml: (1.3257915s)
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-j6z27" [cea63b09-5f2e-4c76-8927-6285a4452f4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 19:19:33.665012    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 19:19:34.880629    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-j6z27" [cea63b09-5f2e-4c76-8927-6285a4452f4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 26.0539273s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (27.42s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.53s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-035800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.53s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.5s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.50s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.67s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.67s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (2.12s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-035800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-035800 "pgrep -a kubelet": (2.1205594s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (2.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-035800 "pgrep -a kubelet"
E1011 19:20:19.961888    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-796400\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-035800 "pgrep -a kubelet": (1.8791735s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.88s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (28.64s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-035800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-035800 replace --force -f testdata\netcat-deployment.yaml: (1.3408406s)
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zsgsb" [0dc8f4a0-429a-4675-9de9-cb19ac70132a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 19:20:40.464456    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-796400\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-zsgsb" [0dc8f4a0-429a-4675-9de9-cb19ac70132a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 27.1275143s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (28.64s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (28.45s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-035800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context false-035800 replace --force -f testdata\netcat-deployment.yaml: (1.2485775s)
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jstd4" [2a2038e5-7890-49c0-a6bb-4eb76fb337ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jstd4" [2a2038e5-7890-49c0-a6bb-4eb76fb337ed] Running
E1011 19:20:43.320757    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 27.1291143s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (28.45s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-035800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-035800 "pgrep -a kubelet": (1.615209s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.62s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (27.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-035800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q2zcj" [dce21f3a-4666-41c8-bef4-bc471a91f729] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q2zcj" [dce21f3a-4666-41c8-bef4-bc471a91f729] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 25.2045158s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (27.95s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.95s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-035800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.95s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.94s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-035800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.94s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.69s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.69s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.52s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.73s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.73s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.71s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.71s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.87s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-035800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.87s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.6s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.60s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.91s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (173.63s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E1011 19:21:31.901530    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-035800\client.crt: The system cannot find the path specified.
E1011 19:21:36.531045    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:36.546767    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:36.561861    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:36.593293    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:36.639487    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:36.728791    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:36.904194    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:37.233144    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:37.875381    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:39.160716    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:41.735521    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:42.149927    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-035800\client.crt: The system cannot find the path specified.
E1011 19:21:46.856308    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:21:57.100472    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-035800\client.crt: The system cannot find the path specified.
E1011 19:22:02.635755    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-035800\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (2m53.6278056s)
--- PASS: TestNetworkPlugins/group/flannel/Start (173.63s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (121.34s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (2m1.3436679s)
--- PASS: TestNetworkPlugins/group/bridge/Start (121.34s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (141.55s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E1011 19:22:43.367259    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-796400\client.crt: The system cannot find the path specified.
E1011 19:22:43.597371    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-035800\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-035800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (2m21.5510941s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (141.55s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.08s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jgx8p" [98284ab0-4136-47ee-ac9d-ca66d9ba3e1d] Running
E1011 19:24:24.493570    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
E1011 19:24:26.600873    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.0687333s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (1.41s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-035800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-035800 "pgrep -a kubelet": (1.4072431s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (1.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (1.55s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-035800 "pgrep -a kubelet"
E1011 19:24:29.632126    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-035800 "pgrep -a kubelet": (1.5518335s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (1.55s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (25.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-035800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-035800 replace --force -f testdata\netcat-deployment.yaml: (1.0359696s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t4n2f" [e265e47e-5d34-4aa2-8676-e5f887e6f82e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t4n2f" [e265e47e-5d34-4aa2-8676-e5f887e6f82e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 24.065758s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (25.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (24.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-035800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-035800 replace --force -f testdata\netcat-deployment.yaml: (1.0522158s)
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q7nqc" [82193d05-670c-4ffe-bf44-6cd26f4e9bb5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 19:24:33.658811    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-642200\client.crt: The system cannot find the path specified.
E1011 19:24:34.874625    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-517500\client.crt: The system cannot find the path specified.
E1011 19:24:39.887195    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-035800\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-q7nqc" [82193d05-670c-4ffe-bf44-6cd26f4e9bb5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 23.095672s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (24.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.45s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-035800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.58s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-035800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.58s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.45s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.51s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.51s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.41s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (1.31s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-035800 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-035800 "pgrep -a kubelet": (1.3115793s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (22.95s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-035800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fsmd9" [53027ad9-302a-4c82-8d97-fdd273214271] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 19:25:15.505247    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-822300\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-fsmd9" [53027ad9-302a-4c82-8d97-fdd273214271] Running
E1011 19:25:21.356538    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.368541    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.383552    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.407543    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.448534    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.448534    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-flannel-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.480532    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-flannel-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.496542    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-flannel-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.528546    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.528546    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-flannel-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.590490    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-flannel-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.683832    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-flannel-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.698833    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-035800\client.crt: The system cannot find the path specified.
E1011 19:25:21.853693    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-flannel-035800\client.crt: The system cannot find the path specified.
E1011 19:25:22.025367    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-035800\client.crt: The system cannot find the path specified.
E1011 19:25:22.180790    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-flannel-035800\client.crt: The system cannot find the path specified.
E1011 19:25:22.678831    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-035800\client.crt: The system cannot find the path specified.
E1011 19:25:22.826203    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-flannel-035800\client.crt: The system cannot find the path specified.
E1011 19:25:23.966854    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\false-035800\client.crt: The system cannot find the path specified.
E1011 19:25:24.106955    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\custom-flannel-035800\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 22.0367275s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (22.95s)

TestNetworkPlugins/group/kubenet/DNS (0.44s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-035800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.44s)

TestNetworkPlugins/group/kubenet/Localhost (0.39s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.39s)

TestNetworkPlugins/group/kubenet/HairPin (0.47s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-035800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.47s)

Test skip (26/314)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestAddons/parallel/Registry (21.45s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 111.2998ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6mntv" [c4fa6c59-9047-4397-b1e9-1b75d93256f0] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.2110356s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4nd5k" [958d43f3-7c73-4d14-9afb-277945ab7450] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0855625s
addons_test.go:339: (dbg) Run:  kubectl --context addons-642200 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-642200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-642200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.7851931s)
addons_test.go:354: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (21.45s)

TestAddons/parallel/Ingress (53.29s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-642200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-642200 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:231: (dbg) Done: kubectl --context addons-642200 replace --force -f testdata\nginx-ingress-v1.yaml: (2.5014221s)
addons_test.go:244: (dbg) Run:  kubectl --context addons-642200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [33e6ce87-5a0e-48a8-b9bb-9e5598bb71bf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [33e6ce87-5a0e-48a8-b9bb-9e5598bb71bf] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 47.1695703s
addons_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-642200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p addons-642200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.6445048s)
addons_test.go:268: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-642200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W1011 18:00:34.477145    9784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:281: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (53.29s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-420000 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-420000 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 10160: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (36s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-420000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-420000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-wj762" [baf2f3f2-6867-47a2-b905-d753b2c15ffc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-wj762" [baf2f3f2-6867-47a2-b905-d753b2c15ffc] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 35.207532s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (36.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (56.35s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-860400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-860400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.544865s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-860400 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-860400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [46c901f7-6cfb-40d1-9fe0-a4310ab5064f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1011 18:18:44.443288    1556 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-420000\client.crt: The system cannot find the path specified.
helpers_test.go:344: "nginx" [46c901f7-6cfb-40d1-9fe0-a4310ab5064f] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 41.1700846s
addons_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-860400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-860400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.1072304s)
addons_test.go:268: debug: unexpected stderr for out/minikube-windows-amd64.exe -p ingress-addon-legacy-860400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W1011 18:19:22.485542    5840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:281: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.35s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (1.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-167400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-167400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-167400: (1.1941964s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.19s)

TestNetworkPlugins/group/cilium (14.93s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-035800 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-035800

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-035800

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-035800

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-035800

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-035800

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-035800

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-035800

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-035800

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-035800

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-035800

>>> host: /etc/nsswitch.conf:
W1011 19:01:09.465237   10608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: /etc/hosts:
W1011 19:01:09.780015   10060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: /etc/resolv.conf:
W1011 19:01:10.076400    5696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-035800

>>> host: crictl pods:
W1011 19:01:10.534087    6200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: crictl containers:
W1011 19:01:10.823057    1608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> k8s: describe netcat deployment:
error: context "cilium-035800" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-035800" does not exist

>>> k8s: netcat logs:
error: context "cilium-035800" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-035800" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-035800" does not exist

>>> k8s: coredns logs:
error: context "cilium-035800" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-035800" does not exist

>>> k8s: api server logs:
error: context "cilium-035800" does not exist

>>> host: /etc/cni:
W1011 19:01:12.095340     908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: ip a s:
W1011 19:01:12.342642    7916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: ip r s:
W1011 19:01:12.593719    9712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: iptables-save:
W1011 19:01:12.858778    1792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: iptables table nat:
W1011 19:01:13.126335    4292 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-035800

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-035800

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-035800" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-035800" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-035800

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-035800

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-035800" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-035800" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-035800" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-035800" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-035800" does not exist

>>> host: kubelet daemon status:
W1011 19:01:14.864373    7376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: kubelet daemon config:
W1011 19:01:15.132928   10884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> k8s: kubelet logs:
W1011 19:01:15.423856    8068 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: /etc/kubernetes/kubelet.conf:
W1011 19:01:15.698010    3144 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: /var/lib/kubelet/config.yaml:
W1011 19:01:15.970239    2588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Wed, 11 Oct 2023 19:00:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://127.0.0.1:52534
  name: pause-375900
- cluster:
    certificate-authority: C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 11 Oct 2023 19:00:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://127.0.0.1:52204
  name: running-upgrade-051900
contexts:
- context:
    cluster: pause-375900
    extensions:
    - extension:
        last-update: Wed, 11 Oct 2023 19:00:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-375900
  name: pause-375900
- context:
    cluster: running-upgrade-051900
    user: running-upgrade-051900
  name: running-upgrade-051900
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-375900
  user:
    client-certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900\client.crt
    client-key: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\pause-375900\client.key
- name: running-upgrade-051900
  user:
    client-certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\running-upgrade-051900/client.crt
    client-key: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\running-upgrade-051900/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-035800

>>> host: docker daemon status:
W1011 19:01:16.518995    1900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: docker daemon config:
W1011 19:01:16.779619    9416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: /etc/docker/daemon.json:
W1011 19:01:17.055137    5180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: docker system info:
W1011 19:01:17.322628    5088 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: cri-docker daemon status:
W1011 19:01:17.583547    8320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: cri-docker daemon config:
W1011 19:01:17.881816   11140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
W1011 19:01:18.142566   10472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: /usr/lib/systemd/system/cri-docker.service:
W1011 19:01:18.409131    3188 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: cri-dockerd version:
W1011 19:01:18.659549   10940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: containerd daemon status:
W1011 19:01:18.911966    4600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: containerd daemon config:
W1011 19:01:19.194410    9772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: /lib/systemd/system/containerd.service:
W1011 19:01:19.471478   10520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: /etc/containerd/config.toml:
W1011 19:01:19.742287    2408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: containerd config dump:
W1011 19:01:20.012401   11124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: crio daemon status:
W1011 19:01:20.302292    8276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: crio daemon config:
W1011 19:01:20.591364    8780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: /etc/crio:
W1011 19:01:20.893448    7480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

>>> host: crio config:
W1011 19:01:21.191024   10768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube2\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-035800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035800"

----------------------- debugLogs end: cilium-035800 [took: 13.4513978s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-035800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-035800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-035800: (1.477054s)
--- SKIP: TestNetworkPlugins/group/cilium (14.93s)
