Test Report: Docker_Windows 16056

f0e0e69dc18db6ea0a802435c9a958d9c2c0ce2a:2023-03-15:28333
Tests failed: 1 of 305

Order | Failed test                                    | Duration (s)
265   | TestPause/serial/SecondStartNoReconfiguration  | 134.75
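The failure below is an assertion on log output: after a second `minikube start` against an already-running cluster, the test expects the message "The running cluster does not require reconfiguration" to appear, and it did not. A minimal sketch of that substring check (a hypothetical reconstruction for illustration, not the actual pause_test.go source):

```go
package main

import (
	"fmt"
	"strings"
)

// expectedMsg is the substring the test looks for in the output of the
// second `minikube start` run.
const expectedMsg = "The running cluster does not require reconfiguration"

// secondStartOK reports whether the captured start output indicates that
// minikube recognized the running cluster and skipped reconfiguration.
func secondStartOK(output string) bool {
	return strings.Contains(output, expectedMsg)
}

func main() {
	// Captured stdout from the failing run (abridged) never contains the
	// expected message, so the check returns false and the test fails.
	captured := "* Updating the running docker \"pause-073300\" container ...\n" +
		"* Done! kubectl is now configured to use \"pause-073300\" cluster"
	fmt.Println(secondStartOK(captured)) // false
}
```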
TestPause/serial/SecondStartNoReconfiguration (134.75s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-073300 --alsologtostderr -v=1 --driver=docker
E0315 21:14:48.849913    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-073300 --alsologtostderr -v=1 --driver=docker: (1m53.1374961s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-073300] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16056
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node pause-073300 in cluster pause-073300
	* Pulling base image ...
	* Updating the running docker "pause-073300" container ...
	* Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	  - Want kubectl v1.26.2? Try 'minikube kubectl -- get pods -A'
	* Done! kubectl is now configured to use "pause-073300" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0315 21:14:40.505339    1332 out.go:296] Setting OutFile to fd 1676 ...
	I0315 21:14:40.620890    1332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 21:14:40.620890    1332 out.go:309] Setting ErrFile to fd 1712...
	I0315 21:14:40.620966    1332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 21:14:40.659650    1332 out.go:303] Setting JSON to false
	I0315 21:14:40.665175    1332 start.go:125] hostinfo: {"hostname":"minikube1","uptime":24283,"bootTime":1678890597,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2728 Build 19045.2728","kernelVersion":"10.0.19045.2728 Build 19045.2728","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0315 21:14:40.665175    1332 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0315 21:14:40.674659    1332 out.go:177] * [pause-073300] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	I0315 21:14:40.679461    1332 notify.go:220] Checking for updates...
	I0315 21:14:40.682314    1332 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 21:14:40.688587    1332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 21:14:40.695397    1332 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0315 21:14:40.697893    1332 out.go:177]   - MINIKUBE_LOCATION=16056
	I0315 21:14:40.700645    1332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 21:14:40.705148    1332 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:14:40.707350    1332 driver.go:365] Setting default libvirt URI to qemu:///system
	I0315 21:14:41.257936    1332 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0315 21:14:41.274456    1332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 21:14:42.518964    1332 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2443346s)
	I0315 21:14:42.520258    1332 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:104 OomKillDisable:true NGoroutines:81 SystemTime:2023-03-15 21:14:41.5899542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccom
p,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-pl
ugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 21:14:42.524614    1332 out.go:177] * Using the docker driver based on existing profile
	I0315 21:14:42.527717    1332 start.go:296] selected driver: docker
	I0315 21:14:42.527717    1332 start.go:857] validating driver "docker" against &{Name:pause-073300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:pause-073300 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage
-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 21:14:42.527717    1332 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 21:14:42.561390    1332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 21:14:43.732580    1332 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.1694367s)
	I0315 21:14:43.732798    1332 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:102 OomKillDisable:true NGoroutines:79 SystemTime:2023-03-15 21:14:42.8469155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccom
p,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-pl
ugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 21:14:43.829162    1332 cni.go:84] Creating CNI manager for ""
	I0315 21:14:43.829305    1332 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:14:43.829305    1332 start_flags.go:319] config:
	{Name:pause-073300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:pause-073300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] Cust
omAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 21:14:43.833646    1332 out.go:177] * Starting control plane node pause-073300 in cluster pause-073300
	I0315 21:14:43.836862    1332 cache.go:120] Beginning downloading kic base image for docker with docker
	I0315 21:14:43.839542    1332 out.go:177] * Pulling base image ...
	I0315 21:14:43.842733    1332 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0315 21:14:43.842733    1332 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon
	I0315 21:14:43.842993    1332 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0315 21:14:43.842993    1332 cache.go:57] Caching tarball of preloaded images
	I0315 21:14:43.843643    1332 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0315 21:14:43.843643    1332 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0315 21:14:43.844172    1332 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\config.json ...
	I0315 21:14:44.192908    1332 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon, skipping pull
	I0315 21:14:44.192908    1332 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f exists in daemon, skipping load
	I0315 21:14:44.192908    1332 cache.go:193] Successfully downloaded all kic artifacts
	I0315 21:14:44.192908    1332 start.go:364] acquiring machines lock for pause-073300: {Name:mkb8165e31048686f4d7bcff493eb42dbfcbb659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 21:14:44.192908    1332 start.go:368] acquired machines lock for "pause-073300" in 0s
	I0315 21:14:44.192908    1332 start.go:96] Skipping create...Using existing machine configuration
	I0315 21:14:44.192908    1332 fix.go:55] fixHost starting: 
	I0315 21:14:44.236201    1332 cli_runner.go:164] Run: docker container inspect pause-073300 --format={{.State.Status}}
	I0315 21:14:44.651203    1332 fix.go:103] recreateIfNeeded on pause-073300: state=Running err=<nil>
	W0315 21:14:44.651203    1332 fix.go:129] unexpected machine state, will restart: <nil>
	I0315 21:14:44.655921    1332 out.go:177] * Updating the running docker "pause-073300" container ...
	I0315 21:14:44.659043    1332 machine.go:88] provisioning docker machine ...
	I0315 21:14:44.659402    1332 ubuntu.go:169] provisioning hostname "pause-073300"
	I0315 21:14:44.679272    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
	I0315 21:14:45.061246    1332 main.go:141] libmachine: Using SSH client type: native
	I0315 21:14:45.063308    1332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65160 <nil> <nil>}
	I0315 21:14:45.063308    1332 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-073300 && echo "pause-073300" | sudo tee /etc/hostname
	I0315 21:14:45.531996    1332 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-073300
	
	I0315 21:14:45.550944    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
	I0315 21:14:45.983958    1332 main.go:141] libmachine: Using SSH client type: native
	I0315 21:14:45.985679    1332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65160 <nil> <nil>}
	I0315 21:14:45.985775    1332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-073300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-073300/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-073300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 21:14:46.354582    1332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 21:14:46.354764    1332 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0315 21:14:46.354764    1332 ubuntu.go:177] setting up certificates
	I0315 21:14:46.354764    1332 provision.go:83] configureAuth start
	I0315 21:14:46.373654    1332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-073300
	I0315 21:14:46.752009    1332 provision.go:138] copyHostCerts
	I0315 21:14:46.753909    1332 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0315 21:14:46.753909    1332 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0315 21:14:46.756980    1332 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0315 21:14:46.760942    1332 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0315 21:14:46.760942    1332 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0315 21:14:46.760942    1332 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0315 21:14:46.763297    1332 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0315 21:14:46.763297    1332 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0315 21:14:46.763953    1332 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0315 21:14:46.765251    1332 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-073300 san=[192.168.103.2 127.0.0.1 localhost 127.0.0.1 minikube pause-073300]
	I0315 21:14:47.103137    1332 provision.go:172] copyRemoteCerts
	I0315 21:14:47.122955    1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 21:14:47.142800    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
	I0315 21:14:47.546205    1332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65160 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-073300\id_rsa Username:docker}
	I0315 21:14:47.789045    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I0315 21:14:47.921672    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 21:14:48.109386    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 21:14:48.239020    1332 provision.go:86] duration metric: configureAuth took 1.8842594s
	I0315 21:14:48.239020    1332 ubuntu.go:193] setting minikube options for container-runtime
	I0315 21:14:48.240260    1332 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:14:48.259529    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
	I0315 21:14:48.656348    1332 main.go:141] libmachine: Using SSH client type: native
	I0315 21:14:48.657193    1332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65160 <nil> <nil>}
	I0315 21:14:48.657193    1332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0315 21:14:48.987281    1332 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0315 21:14:48.987326    1332 ubuntu.go:71] root file system type: overlay
	I0315 21:14:48.987574    1332 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0315 21:14:49.002065    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
	I0315 21:14:49.395383    1332 main.go:141] libmachine: Using SSH client type: native
	I0315 21:14:49.397305    1332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65160 <nil> <nil>}
	I0315 21:14:49.397536    1332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0315 21:14:49.831693    1332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0315 21:14:49.850458    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
	I0315 21:14:50.237476    1332 main.go:141] libmachine: Using SSH client type: native
	I0315 21:14:50.239498    1332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65160 <nil> <nil>}
	I0315 21:14:50.239498    1332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0315 21:14:50.598885    1332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 21:14:50.598941    1332 machine.go:91] provisioned docker machine in 5.9396163s
	I0315 21:14:50.598941    1332 start.go:300] post-start starting for "pause-073300" (driver="docker")
	I0315 21:14:50.599011    1332 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 21:14:50.622720    1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 21:14:50.640775    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
	I0315 21:14:51.043106    1332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65160 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-073300\id_rsa Username:docker}
	I0315 21:14:51.292852    1332 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 21:14:51.314432    1332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0315 21:14:51.314968    1332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0315 21:14:51.315037    1332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0315 21:14:51.315037    1332 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0315 21:14:51.315096    1332 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0315 21:14:51.315677    1332 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0315 21:14:51.316716    1332 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem -> 88122.pem in /etc/ssl/certs
	I0315 21:14:51.340291    1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 21:14:51.396575    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /etc/ssl/certs/88122.pem (1708 bytes)
	I0315 21:14:51.535578    1332 start.go:303] post-start completed in 936.5687ms
	I0315 21:14:51.567809    1332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 21:14:51.587272    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
	I0315 21:14:51.961977    1332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65160 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-073300\id_rsa Username:docker}
	I0315 21:14:52.188251    1332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0315 21:14:52.212356    1332 fix.go:57] fixHost completed within 8.0194642s
	I0315 21:14:52.212356    1332 start.go:83] releasing machines lock for "pause-073300", held for 8.0194642s
	I0315 21:14:52.225798    1332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-073300
	I0315 21:14:52.626752    1332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 21:14:52.642874    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
	I0315 21:14:52.648822    1332 ssh_runner.go:195] Run: cat /version.json
	I0315 21:14:52.667173    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
	I0315 21:14:53.024855    1332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65160 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-073300\id_rsa Username:docker}
	I0315 21:14:53.056225    1332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65160 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-073300\id_rsa Username:docker}
	I0315 21:14:53.415628    1332 ssh_runner.go:195] Run: systemctl --version
	I0315 21:14:53.456301    1332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0315 21:14:53.496589    1332 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0315 21:14:53.535844    1332 start.go:407] unable to name loopback interface in dockerConfigureNetworkPlugin: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0315 21:14:53.554811    1332 ssh_runner.go:195] Run: which cri-dockerd
	I0315 21:14:53.594396    1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0315 21:14:53.630699    1332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0315 21:14:53.714760    1332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 21:14:53.747510    1332 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 21:14:53.747558    1332 start.go:485] detecting cgroup driver to use...
	I0315 21:14:53.747624    1332 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0315 21:14:53.747780    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 21:14:53.826353    1332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0315 21:14:53.897205    1332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0315 21:14:53.940137    1332 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0315 21:14:53.959192    1332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0315 21:14:54.008505    1332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 21:14:54.085644    1332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0315 21:14:54.174515    1332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 21:14:54.227887    1332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 21:14:54.296922    1332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0315 21:14:54.377501    1332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 21:14:54.430855    1332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 21:14:54.493740    1332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 21:14:55.075764    1332 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0315 21:15:01.331130    1332 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (6.2553782s)
	I0315 21:15:01.331250    1332 start.go:485] detecting cgroup driver to use...
	I0315 21:15:01.331319    1332 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0315 21:15:01.349148    1332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0315 21:15:01.444720    1332 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0315 21:15:01.465261    1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0315 21:15:01.534382    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 21:15:01.611076    1332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0315 21:15:01.898149    1332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0315 21:15:02.288028    1332 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0315 21:15:02.288028    1332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0315 21:15:02.389281    1332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 21:15:02.871729    1332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0315 21:15:17.505642    1332 ssh_runner.go:235] Completed: sudo systemctl restart docker: (14.6333883s)
	I0315 21:15:17.522785    1332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0315 21:15:18.221285    1332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0315 21:15:18.541325    1332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0315 21:15:18.966238    1332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 21:15:19.243564    1332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0315 21:15:19.304622    1332 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0315 21:15:19.321224    1332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0315 21:15:19.349813    1332 start.go:553] Will wait 60s for crictl version
	I0315 21:15:19.372859    1332 ssh_runner.go:195] Run: which crictl
	I0315 21:15:19.411386    1332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 21:15:19.725908    1332 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0315 21:15:19.746674    1332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0315 21:15:19.845681    1332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0315 21:15:20.161499    1332 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0315 21:15:20.172032    1332 cli_runner.go:164] Run: docker exec -t pause-073300 dig +short host.docker.internal
	I0315 21:15:20.783625    1332 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0315 21:15:20.810526    1332 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0315 21:15:20.863571    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-073300
	I0315 21:15:21.186181    1332 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0315 21:15:21.197794    1332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0315 21:15:21.340421    1332 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0315 21:15:21.340421    1332 docker.go:560] Images already preloaded, skipping extraction
	I0315 21:15:21.358806    1332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0315 21:15:21.530676    1332 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0315 21:15:21.530902    1332 cache_images.go:84] Images are preloaded, skipping loading
	I0315 21:15:21.547617    1332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0315 21:15:21.733812    1332 cni.go:84] Creating CNI manager for ""
	I0315 21:15:21.733925    1332 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:15:21.733972    1332 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0315 21:15:21.734031    1332 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-073300 NodeName:pause-073300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0315 21:15:21.734518    1332 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-073300"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 21:15:21.734768    1332 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-073300 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:pause-073300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0315 21:15:21.756840    1332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0315 21:15:21.947537    1332 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 21:15:21.972855    1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 21:15:22.136836    1332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (445 bytes)
	I0315 21:15:22.443146    1332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 21:15:22.853268    1332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2091 bytes)
	I0315 21:15:23.170849    1332 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0315 21:15:23.236998    1332 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300 for IP: 192.168.103.2
	I0315 21:15:23.237123    1332 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:23.237580    1332 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0315 21:15:23.237580    1332 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0315 21:15:23.239738    1332 certs.go:311] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\client.key
	I0315 21:15:23.240379    1332 certs.go:311] skipping minikube signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\apiserver.key.33fce0b9
	I0315 21:15:23.240672    1332 certs.go:311] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\proxy-client.key
	I0315 21:15:23.243463    1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem (1338 bytes)
	W0315 21:15:23.243463    1332 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812_empty.pem, impossibly tiny 0 bytes
	I0315 21:15:23.244058    1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0315 21:15:23.244358    1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0315 21:15:23.244739    1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0315 21:15:23.244739    1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0315 21:15:23.245646    1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem (1708 bytes)
	I0315 21:15:23.247378    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0315 21:15:23.378538    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 21:15:23.450108    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 21:15:23.516385    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 21:15:23.582912    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 21:15:23.682413    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 21:15:23.783573    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 21:15:23.868684    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 21:15:23.983032    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 21:15:24.065791    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem --> /usr/share/ca-certificates/8812.pem (1338 bytes)
	I0315 21:15:24.153574    1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /usr/share/ca-certificates/88122.pem (1708 bytes)
	I0315 21:15:24.274466    1332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 21:15:24.353599    1332 ssh_runner.go:195] Run: openssl version
	I0315 21:15:24.408119    1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88122.pem && ln -fs /usr/share/ca-certificates/88122.pem /etc/ssl/certs/88122.pem"
	I0315 21:15:24.468740    1332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88122.pem
	I0315 21:15:24.503273    1332 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 15 20:10 /usr/share/ca-certificates/88122.pem
	I0315 21:15:24.523667    1332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88122.pem
	I0315 21:15:24.585137    1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/88122.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 21:15:24.626254    1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 21:15:24.682792    1332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:15:24.703174    1332 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 15 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:15:24.715257    1332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:15:24.756366    1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 21:15:24.813357    1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8812.pem && ln -fs /usr/share/ca-certificates/8812.pem /etc/ssl/certs/8812.pem"
	I0315 21:15:24.856264    1332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8812.pem
	I0315 21:15:24.883599    1332 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 15 20:10 /usr/share/ca-certificates/8812.pem
	I0315 21:15:24.896180    1332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8812.pem
	I0315 21:15:24.938726    1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8812.pem /etc/ssl/certs/51391683.0"
	I0315 21:15:24.973171    1332 kubeadm.go:401] StartCluster: {Name:pause-073300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:pause-073300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 21:15:24.983466    1332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0315 21:15:25.051270    1332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 21:15:25.086855    1332 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0315 21:15:25.086855    1332 kubeadm.go:633] restartCluster start
	I0315 21:15:25.100452    1332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0315 21:15:25.132370    1332 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:25.140295    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-073300
	I0315 21:15:25.398909    1332 kubeconfig.go:92] found "pause-073300" server: "https://127.0.0.1:65165"
	I0315 21:15:25.401452    1332 kapi.go:59] client config for pause-073300: &rest.Config{Host:"https://127.0.0.1:65165", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1deb720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0315 21:15:25.410936    1332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0315 21:15:25.444192    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:25.456631    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:25.492162    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:25.993789    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:26.001803    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:26.031907    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:26.498885    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:26.520413    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:26.747568    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:26.998577    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:27.005491    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:27.038573    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:27.494449    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:27.510680    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:27.648998    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:28.001209    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:28.016866    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:28.252092    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:28.497926    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:28.519187    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:28.938518    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:29.005873    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:29.022195    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:29.437505    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:29.498878    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:29.509169    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:15:29.790027    1332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6279/cgroup
	I0315 21:15:30.138061    1332 api_server.go:181] apiserver freezer: "20:freezer:/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16"
	I0315 21:15:30.167896    1332 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16/freezer.state
	I0315 21:15:30.342651    1332 api_server.go:203] freezer state: "THAWED"
	I0315 21:15:30.342651    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:30.356716    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:15:30.356862    1332 retry.go:31] will retry after 297.564807ms: state is "Stopped"
	I0315 21:15:30.665183    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:30.674974    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:15:30.675152    1332 retry.go:31] will retry after 319.696256ms: state is "Stopped"
	I0315 21:15:31.004595    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:36.012455    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0315 21:15:36.012558    1332 retry.go:31] will retry after 307.806183ms: state is "Stopped"
	I0315 21:15:36.332781    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:41.339223    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0315 21:15:41.339409    1332 retry.go:31] will retry after 386.719795ms: state is "Stopped"
	I0315 21:15:41.739620    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:44.046130    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 21:15:44.046265    1332 retry.go:31] will retry after 731.95405ms: https://127.0.0.1:65165/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 21:15:44.784826    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:44.930024    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:15:44.930412    1332 kubeadm.go:608] needs reconfigure: apiserver error: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:15:44.930412    1332 kubeadm.go:1120] stopping kube-system containers ...
	I0315 21:15:44.948612    1332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0315 21:15:45.444192    1332 docker.go:456] Stopping containers: [e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0]
	I0315 21:15:45.468741    1332 ssh_runner.go:195] Run: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0
	I0315 21:15:55.263337    1332 ssh_runner.go:235] Completed: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0: (9.7945662s)
	I0315 21:15:55.280007    1332 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 21:15:55.667015    1332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 21:15:55.884955    1332 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 15 21:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Mar 15 21:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 15 21:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar 15 21:13 /etc/kubernetes/scheduler.conf
	
	I0315 21:15:55.906317    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 21:15:55.970490    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 21:15:56.077831    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 21:15:56.164837    1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:56.189369    1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 21:15:56.278633    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 21:15:56.350783    1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:56.368651    1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 21:15:56.472488    1332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 21:15:56.554151    1332 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0315 21:15:56.554288    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:56.838520    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:58.821631    1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.9831146s)
	I0315 21:15:58.821631    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:59.241679    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:59.531884    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:59.837145    1332 api_server.go:51] waiting for apiserver process to appear ...
	I0315 21:15:59.862394    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:00.562737    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:01.047261    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:01.561853    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:02.057572    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:02.554491    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:03.060987    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:03.560744    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:04.058096    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:04.574094    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:05.054883    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:05.558867    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:06.064030    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:06.559451    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:06.836193    1332 api_server.go:71] duration metric: took 6.999061s to wait for apiserver process to appear ...
	I0315 21:16:06.836348    1332 api_server.go:87] waiting for apiserver healthz status ...
	I0315 21:16:06.836472    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:06.844702    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:16:07.349930    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:07.360047    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:16:07.852770    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:12.856341    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0315 21:16:13.355164    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:13.531052    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 21:16:13.531052    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 21:16:13.856894    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:13.948093    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:13.948207    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:14.353756    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:14.444021    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:14.444582    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:14.850032    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:14.881729    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:14.881822    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:15.359619    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:15.458273    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:15.458359    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:15.846895    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:15.875897    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
	ok
	I0315 21:16:15.909269    1332 api_server.go:140] control plane version: v1.26.2
	I0315 21:16:15.909297    1332 api_server.go:130] duration metric: took 9.0729659s to wait for apiserver health ...
	I0315 21:16:15.909353    1332 cni.go:84] Creating CNI manager for ""
	I0315 21:16:15.909353    1332 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:16:15.912744    1332 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 21:16:15.925415    1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 21:16:15.965847    1332 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 21:16:16.079955    1332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 21:16:16.096342    1332 system_pods.go:59] 6 kube-system pods found
	I0315 21:16:16.096342    1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 21:16:16.096342    1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 21:16:16.096342    1332 system_pods.go:74] duration metric: took 16.2168ms to wait for pod list to return data ...
	I0315 21:16:16.096342    1332 node_conditions.go:102] verifying NodePressure condition ...
	I0315 21:16:16.105140    1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0315 21:16:16.105226    1332 node_conditions.go:123] node cpu capacity is 16
	I0315 21:16:16.105269    1332 node_conditions.go:105] duration metric: took 8.8846ms to run NodePressure ...
	I0315 21:16:16.105316    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:16:17.333440    1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.2280887s)
	I0315 21:16:17.333615    1332 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0315 21:16:17.354686    1332 kubeadm.go:784] kubelet initialised
	I0315 21:16:17.354754    1332 kubeadm.go:785] duration metric: took 21.1391ms waiting for restarted kubelet to initialise ...
	I0315 21:16:17.354822    1332 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:17.435085    1332 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:19.521467    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:22.006711    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:24.016700    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:26.048179    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:28.050667    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:29.001447    1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.001447    1332 pod_ready.go:81] duration metric: took 11.5663842s waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.001447    1332 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.028330    1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.028330    1332 pod_ready.go:81] duration metric: took 26.8832ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.028330    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.057628    1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.057628    1332 pod_ready.go:81] duration metric: took 29.2978ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.057628    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.092004    1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.092004    1332 pod_ready.go:81] duration metric: took 34.3758ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.092004    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.131434    1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.131486    1332 pod_ready.go:81] duration metric: took 39.482ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.131486    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.402295    1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.402345    1332 pod_ready.go:81] duration metric: took 270.8098ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.402345    1332 pod_ready.go:38] duration metric: took 12.0475003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:29.402386    1332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 21:16:29.426130    1332 ops.go:34] apiserver oom_adj: -16
	I0315 21:16:29.426187    1332 kubeadm.go:637] restartCluster took 1m4.338895s
	I0315 21:16:29.426266    1332 kubeadm.go:403] StartCluster complete in 1m4.4532784s
	I0315 21:16:29.426351    1332 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:29.426601    1332 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 21:16:29.429857    1332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:29.432982    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 21:16:29.432982    1332 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0315 21:16:29.433680    1332 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:16:29.438415    1332 out.go:177] * Enabled addons: 
	I0315 21:16:29.443738    1332 addons.go:499] enable addons completed in 10.8462ms: enabled=[]
	I0315 21:16:29.452842    1332 kapi.go:59] client config for pause-073300: &rest.Config{Host:"https://127.0.0.1:65165", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1deb720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0315 21:16:29.467764    1332 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-073300" context rescaled to 1 replicas
	I0315 21:16:29.467764    1332 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0315 21:16:29.470858    1332 out.go:177] * Verifying Kubernetes components...
	I0315 21:16:29.484573    1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:16:29.761590    1332 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0315 21:16:29.775423    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-073300
	I0315 21:16:30.117208    1332 node_ready.go:35] waiting up to 6m0s for node "pause-073300" to be "Ready" ...
	I0315 21:16:30.134817    1332 node_ready.go:49] node "pause-073300" has status "Ready":"True"
	I0315 21:16:30.134886    1332 node_ready.go:38] duration metric: took 17.4789ms waiting for node "pause-073300" to be "Ready" ...
	I0315 21:16:30.135066    1332 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:30.162562    1332 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.219441    1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:30.219583    1332 pod_ready.go:81] duration metric: took 57.0207ms waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.219583    1332 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.608418    1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:30.608458    1332 pod_ready.go:81] duration metric: took 388.876ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.608458    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.017074    1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:31.017074    1332 pod_ready.go:81] duration metric: took 408.6175ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.017074    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.395349    1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:31.395349    1332 pod_ready.go:81] duration metric: took 378.275ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.395349    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.792495    1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:31.792495    1332 pod_ready.go:81] duration metric: took 397.1476ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.792495    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:32.219569    1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:32.220120    1332 pod_ready.go:81] duration metric: took 427.0739ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:32.220120    1332 pod_ready.go:38] duration metric: took 2.0850147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:32.220120    1332 api_server.go:51] waiting for apiserver process to appear ...
	I0315 21:16:32.232971    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:32.332638    1332 api_server.go:71] duration metric: took 2.8648801s to wait for apiserver process to appear ...
	I0315 21:16:32.332638    1332 api_server.go:87] waiting for apiserver healthz status ...
	I0315 21:16:32.332638    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:32.362918    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
	ok
	I0315 21:16:32.430820    1332 api_server.go:140] control plane version: v1.26.2
	I0315 21:16:32.430820    1332 api_server.go:130] duration metric: took 98.1819ms to wait for apiserver health ...
	I0315 21:16:32.430820    1332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 21:16:32.455349    1332 system_pods.go:59] 6 kube-system pods found
	I0315 21:16:32.455486    1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
	I0315 21:16:32.455486    1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
	I0315 21:16:32.455544    1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
	I0315 21:16:32.455642    1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
	I0315 21:16:32.455642    1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
	I0315 21:16:32.455685    1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
	I0315 21:16:32.455785    1332 system_pods.go:74] duration metric: took 24.9239ms to wait for pod list to return data ...
	I0315 21:16:32.455785    1332 default_sa.go:34] waiting for default service account to be created ...
	I0315 21:16:32.637154    1332 default_sa.go:45] found service account: "default"
	I0315 21:16:32.637301    1332 default_sa.go:55] duration metric: took 181.4813ms for default service account to be created ...
	I0315 21:16:32.637301    1332 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 21:16:32.844031    1332 system_pods.go:86] 6 kube-system pods found
	I0315 21:16:32.844031    1332 system_pods.go:89] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
	I0315 21:16:32.844031    1332 system_pods.go:126] duration metric: took 206.7296ms to wait for k8s-apps to be running ...
	I0315 21:16:32.844031    1332 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 21:16:32.858698    1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:16:32.902525    1332 system_svc.go:56] duration metric: took 56.9493ms WaitForService to wait for kubelet.
	I0315 21:16:32.902598    1332 kubeadm.go:578] duration metric: took 3.4348415s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0315 21:16:32.902669    1332 node_conditions.go:102] verifying NodePressure condition ...
	I0315 21:16:33.016156    1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0315 21:16:33.016241    1332 node_conditions.go:123] node cpu capacity is 16
	I0315 21:16:33.016278    1332 node_conditions.go:105] duration metric: took 113.5716ms to run NodePressure ...
	I0315 21:16:33.016316    1332 start.go:228] waiting for startup goroutines ...
	I0315 21:16:33.016316    1332 start.go:233] waiting for cluster config update ...
	I0315 21:16:33.016351    1332 start.go:242] writing updated cluster config ...
	I0315 21:16:33.039378    1332 ssh_runner.go:195] Run: rm -f paused
	I0315 21:16:33.289071    1332 start.go:555] kubectl: 1.18.2, cluster: 1.26.2 (minor skew: 8)
	I0315 21:16:33.292949    1332 out.go:177] 
	W0315 21:16:33.295479    1332 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.2.
	! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.2.
	I0315 21:16:33.297706    1332 out.go:177]   - Want kubectl v1.26.2? Try 'minikube kubectl -- get pods -A'
	I0315 21:16:33.301501    1332 out.go:177] * Done! kubectl is now configured to use "pause-073300" cluster and "default" namespace by default

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-073300
helpers_test.go:235: (dbg) docker inspect pause-073300:

-- stdout --
	[
	    {
	        "Id": "8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af",
	        "Created": "2023-03-15T21:12:57.6447279Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235036,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-15T21:13:02.5343301Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c2228ee73b919fe6986a8848f936a81a268f0e56f65fc402964f596a1336d16b",
	        "ResolvConfPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/hostname",
	        "HostsPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/hosts",
	        "LogPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af-json.log",
	        "Name": "/pause-073300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-073300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-073300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b-init/diff:/var/lib/docker/overlay2/dd4a105805e89f3781ba34ad53d0a86096f0b864f9eade98210c90b3db11e614/diff:/var/lib/docker/overlay2/85f05c8966ab20f24eea0cadf9b702a2755c1a700aee4fcacd3754b8fa7f8a91/diff:/var/lib/docker/overlay2/b2c60f67ad52427067a519010db687573f6b5b01526e9e9493d88bbb3dcaf069/diff:/var/lib/docker/overlay2/ca870ef465e163b19b7e0ef24b89c201cc7cfe12753a6ca6a515827067e4fc98/diff:/var/lib/docker/overlay2/f55801eccf5ae4ff6206eaaaca361e1d9bfadc5759172bb8072e835b0002419b/diff:/var/lib/docker/overlay2/3da247e6db7b0c502d6067a49cfb704f596cd5fe9a3a874f6888ae9cc2373233/diff:/var/lib/docker/overlay2/f0dcb6d169a751860b7c097c666afe3d8fba3aac20d90e95b7f85913b7d1fda7/diff:/var/lib/docker/overlay2/a0c906b3378b625d84a7a2d043cc982545599c488b72767e2b4822211ddee871/diff:/var/lib/docker/overlay2/1380f7e23737bb69bab3e1c3b37fff4a603a1096ba1e984f2808fdb9fc5664b7/diff:/var/lib/docker/overlay2/f09380
dffb1afe5e97599b999b6d05a1d0b97490fc3afb897018955e3589ddf0/diff:/var/lib/docker/overlay2/12504a4aab3b43a1624555c565265eb2a252f3cc64b5942527ead795f1b46742/diff:/var/lib/docker/overlay2/2f17a40545e098dc56e6667d78dfde761f9ae57ff4c2dcab77a6135abc29f050/diff:/var/lib/docker/overlay2/378841db26151d8a66f60032a9366d4572aeb0fd0db1c1af9429abf5d7b6ab82/diff:/var/lib/docker/overlay2/14ee7241acf63b7e56e700bccdbcc29bd6530ebd357799238641498ccb978bc1/diff:/var/lib/docker/overlay2/0e384b8276413ac21818038eacaf3da54a8ac43c6ccef737b2c4e70e568fe287/diff:/var/lib/docker/overlay2/66beff05ea52aebfaea737c44ff3da16f742e7e2577ccea2c1fe954085a1e7f4/diff:/var/lib/docker/overlay2/fe7b0a2c7d3f1889e322a156881a5066e5e784dc1888fbf172b4beada499c14a/diff:/var/lib/docker/overlay2/bf3118300571672a5d3b839bbbbaa42516c05f16305f5b944d88d38687857207/diff:/var/lib/docker/overlay2/d1326cf983418efce550556b370f71d9b4d9e6671a9267ea6433967dcafff129/diff:/var/lib/docker/overlay2/cc4d1369146bbaac53f23e5cb8e072c195a8c109396c1f305d9a90dbcb491d62/diff:/var/lib/d
ocker/overlay2/20a6a00f4e15b51632a8a26911faf3243318c3e7bd9266fe9c926ca6070526a8/diff:/var/lib/docker/overlay2/6a6bfa0be9e2c1a0aa9fa555897c7f62f7c23b782a2117560731f10b833692a0/diff:/var/lib/docker/overlay2/0d9ed53179f81c8d2e276195863f6ac1ba99be69a7217caa97c19fe1121b0d38/diff:/var/lib/docker/overlay2/f9e70916967de3d00f48ca66d15ec3af34bd3980334b7ecb8950be0a5aee2e5e/diff:/var/lib/docker/overlay2/8a3ebe53f0b355704a58efda53f1dcf8ae0099f0a7947c748e7c447044baed05/diff:/var/lib/docker/overlay2/f6841f5c7deb52ba587f1365fd0bc48fe4334bd9678f4846740d9e4f3df386c4/diff:/var/lib/docker/overlay2/7729eb6c4bb6c79eae923e1946b180dcdb33aa85c259a8a21b46994e681a329f/diff:/var/lib/docker/overlay2/86ccbe980393e3c2dea4faf1f5b45fa86ac8f47190cf4fb3ebb23d5fd6687d44/diff:/var/lib/docker/overlay2/48b28921897a52ef79e37091b3d3df88fa4e01604e3a63d7e3dbbd72e551797c/diff:/var/lib/docker/overlay2/b9f9c70e4945260452936930e508cb1e7d619927da4230c7b792e5908a93ec46/diff:/var/lib/docker/overlay2/39f84637efc722da57b6de997d757e8709af3d48f8cba3da8848d3674aa
7ba4d/diff:/var/lib/docker/overlay2/9d81ba80e5128eb395bcffc7b56889c3d18172c222e637671a4b3c12c0a72afd/diff:/var/lib/docker/overlay2/03583facbdd50e79e467eb534dfcbe3d5e47aef4b25195138b3c0134ebd7f07e/diff:/var/lib/docker/overlay2/38e991cef8fb39c883da64e57775232fd1df5a4c67f32565e747b7363f336632/diff:/var/lib/docker/overlay2/0e0ebf6f489a93585842ec4fef7d044da67fd8a9504f91fe03cc03c6928134b8/diff:/var/lib/docker/overlay2/dedec87bbba9e6a1a68a159c167cac4c10a25918fa3d00630d6570db2ca290eb/diff:/var/lib/docker/overlay2/dc09130400d9f44a28862a6484b44433985893e9a8f49df62c38c0bd6b5e4e2c/diff:/var/lib/docker/overlay2/f00d229f6d9f2960571b2e1c365f30bd680b686c0d4569b5190c072a626c6811/diff:/var/lib/docker/overlay2/1a9993f098965bbd60b6e43b5998e4fcae02f81d65cc863bd8f6e29f7e2b8426/diff:/var/lib/docker/overlay2/500f950cf1835311103c129d3c1487e8e6b917ad928788ee14527cd8342c544f/diff:/var/lib/docker/overlay2/018feb310d5aa53cd6175c82f8ca56d22b3c1ad26ae5cfda5f6e3b56ca3919e6/diff:/var/lib/docker/overlay2/f84198610374e88e1ba6917bf70c8d9cea6ede
68b5fb4852c7eebcb536a12a83/diff",
	                "MergedDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-073300",
	                "Source": "/var/lib/docker/volumes/pause-073300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-073300",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-073300",
	                "name.minikube.sigs.k8s.io": "pause-073300",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c465f6b5b8ea2cbabcd582f953a2ee6755ba6c0b6db6fbc3b931a291aafae975",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65160"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65161"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65164"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65165"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c465f6b5b8ea",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-073300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8be68eee5af2",
	                        "pause-073300"
	                    ],
	                    "NetworkID": "e97288cdb8ed8d3c843be70e49117f727e8c88772310c60f193237b2f3d2167f",
	                    "EndpointID": "7dff20190b061cfe2a0b46f43c2f9a085fd94900413646e6b074cab27b5ac50e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-073300 -n pause-073300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-073300 -n pause-073300: (2.0626046s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-073300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-073300 logs -n 25: (3.6909868s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo cat                            | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo cat                            | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | cri-dockerd --version                                |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | systemctl status containerd                          |                          |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | systemctl cat containerd                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo cat                            | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo cat                            | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | containerd config dump                               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | systemctl status crio --all                          |                          |                   |         |                     |                     |
	|         | --full --no-pager                                    |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo find                           | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo crio                           | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | config                                               |                          |                   |         |                     |                     |
	| delete  | -p cilium-899600                                     | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
	| start   | -p force-systemd-env-387800                          | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:15 UTC |
	|         | --memory=2048                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	| ssh     | cert-options-298900 ssh                              | cert-options-298900      | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
	|         | openssl x509 -text -noout -in                        |                          |                   |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                          |                   |         |                     |                     |
	| ssh     | -p cert-options-298900 -- sudo                       | cert-options-298900      | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                          |                   |         |                     |                     |
	| delete  | -p cert-options-298900                               | cert-options-298900      | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
	| delete  | -p cert-expiration-023900                            | cert-expiration-023900   | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
	| start   | -p old-k8s-version-103800                            | old-k8s-version-103800   | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | --memory=2200                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                          |                   |         |                     |                     |
	|         | --kvm-network=default                                |                          |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                          |                   |         |                     |                     |
	|         | --disable-driver-mounts                              |                          |                   |         |                     |                     |
	|         | --keep-context=false                                 |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                          |                   |         |                     |                     |
	| start   | -p no-preload-470000                                 | no-preload-470000        | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | --memory=2200                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr                                    |                          |                   |         |                     |                     |
	|         | --wait=true --preload=false                          |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.26.2                         |                          |                   |         |                     |                     |
	| start   | -p pause-073300                                      | pause-073300             | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:14 UTC | 15 Mar 23 21:16 UTC |
	|         | --alsologtostderr -v=1                               |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	| ssh     | force-systemd-env-387800                             | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC | 15 Mar 23 21:15 UTC |
	|         | ssh docker info --format                             |                          |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                                    |                          |                   |         |                     |                     |
	| delete  | -p force-systemd-env-387800                          | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC | 15 Mar 23 21:15 UTC |
	| start   | -p embed-certs-348900                                | embed-certs-348900       | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC |                     |
	|         | --memory=2200                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                          |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.26.2                         |                          |                   |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/15 21:15:28
	Running on machine: minikube1
	Binary: Built with gc go1.20.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 21:15:28.142992   11164 out.go:296] Setting OutFile to fd 1840 ...
	I0315 21:15:28.223401   11164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 21:15:28.223401   11164 out.go:309] Setting ErrFile to fd 1952...
	I0315 21:15:28.223401   11164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 21:15:28.262334   11164 out.go:303] Setting JSON to false
	I0315 21:15:28.267297   11164 start.go:125] hostinfo: {"hostname":"minikube1","uptime":24330,"bootTime":1678890597,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2728 Build 19045.2728","kernelVersion":"10.0.19045.2728 Build 19045.2728","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0315 21:15:28.269446   11164 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0315 21:15:28.271110   11164 out.go:177] * [embed-certs-348900] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	I0315 21:15:28.276466   11164 notify.go:220] Checking for updates...
	I0315 21:15:28.279987   11164 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 21:15:28.284307   11164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 21:15:28.287394   11164 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0315 21:15:28.289437   11164 out.go:177]   - MINIKUBE_LOCATION=16056
	I0315 21:15:28.293408   11164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 21:15:27.107652    3304 kubeadm.go:322] [apiclient] All control plane components are healthy after 22.564526 seconds
	I0315 21:15:27.107905    3304 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 21:15:27.174450    3304 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 21:15:27.850318    3304 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 21:15:27.850318    3304 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-103800 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0315 21:15:28.451439    3304 kubeadm.go:322] [bootstrap-token] Using token: 1vsykl.s1ca43i7aq3le3xp
	I0315 21:15:28.454827    3304 out.go:204]   - Configuring RBAC rules ...
	I0315 21:15:28.455102    3304 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 21:15:28.540595    3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 21:15:28.708614    3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 21:15:28.750604    3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 21:15:28.768374    3304 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 21:15:28.296206   11164 config.go:182] Loaded profile config "no-preload-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:15:28.296901   11164 config.go:182] Loaded profile config "old-k8s-version-103800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0315 21:15:28.296901   11164 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:15:28.297434   11164 driver.go:365] Setting default libvirt URI to qemu:///system
	I0315 21:15:28.716358   11164 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0315 21:15:28.733128   11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 21:15:30.097371   11164 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.3641858s)
	I0315 21:15:30.098315   11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:15:28.9739466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 21:15:30.101949   11164 out.go:177] * Using the docker driver based on user configuration
	I0315 21:15:25.993789    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:26.001803    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:26.031907    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:26.498885    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:26.520413    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:26.747568    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:26.998577    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:27.005491    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:27.038573    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:27.494449    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:27.510680    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:27.648998    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:28.001209    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:28.016866    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:28.252092    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:28.497926    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:28.519187    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:28.938518    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:29.005873    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:29.022195    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:29.437505    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:29.498878    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:29.509169    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:15:29.790027    1332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6279/cgroup
	I0315 21:15:30.138061    1332 api_server.go:181] apiserver freezer: "20:freezer:/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16"
	I0315 21:15:30.167896    1332 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16/freezer.state
	I0315 21:15:30.342651    1332 api_server.go:203] freezer state: "THAWED"
	I0315 21:15:30.342651    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:30.356716    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:15:30.356862    1332 retry.go:31] will retry after 297.564807ms: state is "Stopped"
	I0315 21:15:28.433538    4576 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.6-0 | docker load": (19.494237s)
	I0315 21:15:28.433538    4576 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.6-0 from cache
	I0315 21:15:28.433538    4576 cache_images.go:123] Successfully loaded all cached images
	I0315 21:15:28.434115    4576 cache_images.go:92] LoadImages completed in 1m0.6105675s
	I0315 21:15:28.453600    4576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0315 21:15:28.577481    4576 cni.go:84] Creating CNI manager for ""
	I0315 21:15:28.577553    4576 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:15:28.577553    4576 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0315 21:15:28.577617    4576 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-470000 NodeName:no-preload-470000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0315 21:15:28.577869    4576 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "no-preload-470000"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 21:15:28.577869    4576 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=no-preload-470000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:no-preload-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0315 21:15:28.591514    4576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0315 21:15:28.640201    4576 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.26.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.26.2': No such file or directory
	
	Initiating transfer...
	I0315 21:15:28.658006    4576 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.26.2
	I0315 21:15:28.718165    4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubectl
	I0315 21:15:28.718374    4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubeadm
	I0315 21:15:28.718374    4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubelet
	I0315 21:15:30.110051    4576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubeadm
	I0315 21:15:30.131361    4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubeadm': No such file or directory
	I0315 21:15:30.131361    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubeadm --> /var/lib/minikube/binaries/v1.26.2/kubeadm (46768128 bytes)
	I0315 21:15:30.168761    4576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubectl
	I0315 21:15:30.671927    4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubectl': No such file or directory
	I0315 21:15:30.672203    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubectl --> /var/lib/minikube/binaries/v1.26.2/kubectl (48029696 bytes)
	I0315 21:15:30.105668   11164 start.go:296] selected driver: docker
	I0315 21:15:30.105668   11164 start.go:857] validating driver "docker" against <nil>
	I0315 21:15:30.105668   11164 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 21:15:30.254283   11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 21:15:31.493680   11164 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2393516s)
	I0315 21:15:31.494207   11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:15:30.5680929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plu
gins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 21:15:31.494635   11164 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0315 21:15:31.496393   11164 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 21:15:31.499064   11164 out.go:177] * Using Docker Desktop driver with root privileges
	I0315 21:15:31.501160   11164 cni.go:84] Creating CNI manager for ""
	I0315 21:15:31.501160   11164 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:15:31.501160   11164 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 21:15:31.501160   11164 start_flags.go:319] config:
	{Name:embed-certs-348900 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 21:15:31.504086   11164 out.go:177] * Starting control plane node embed-certs-348900 in cluster embed-certs-348900
	I0315 21:15:31.506766   11164 cache.go:120] Beginning downloading kic base image for docker with docker
	I0315 21:15:31.510102   11164 out.go:177] * Pulling base image ...
	I0315 21:15:31.512871   11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0315 21:15:31.512871   11164 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon
	I0315 21:15:31.513118   11164 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0315 21:15:31.513179   11164 cache.go:57] Caching tarball of preloaded images
	I0315 21:15:31.513395   11164 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0315 21:15:31.513395   11164 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0315 21:15:31.514113   11164 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json ...
	I0315 21:15:31.514113   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json: {Name:mk3060d08febbde2429fe9a2baf8bbeb029a2640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:31.875381   11164 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon, skipping pull
	I0315 21:15:31.875429   11164 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f exists in daemon, skipping load
	I0315 21:15:31.875429   11164 cache.go:193] Successfully downloaded all kic artifacts
	I0315 21:15:31.875429   11164 start.go:364] acquiring machines lock for embed-certs-348900: {Name:mk2351699223ac71a23a94063928109d9d9f576a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 21:15:31.875429   11164 start.go:368] acquired machines lock for "embed-certs-348900" in 0s
	I0315 21:15:31.876003   11164 start.go:93] Provisioning new machine with config: &{Name:embed-certs-348900 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0315 21:15:31.876319   11164 start.go:125] createHost starting for "" (driver="docker")
	I0315 21:15:31.880060   11164 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0315 21:15:31.880999   11164 start.go:159] libmachine.API.Create for "embed-certs-348900" (driver="docker")
	I0315 21:15:31.881063   11164 client.go:168] LocalClient.Create starting
	I0315 21:15:31.881279   11164 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0315 21:15:31.881815   11164 main.go:141] libmachine: Decoding PEM data...
	I0315 21:15:31.881932   11164 main.go:141] libmachine: Parsing certificate...
	I0315 21:15:31.881975   11164 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0315 21:15:31.881975   11164 main.go:141] libmachine: Decoding PEM data...
	I0315 21:15:31.881975   11164 main.go:141] libmachine: Parsing certificate...
	I0315 21:15:31.896077   11164 cli_runner.go:164] Run: docker network inspect embed-certs-348900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0315 21:15:32.230585   11164 cli_runner.go:211] docker network inspect embed-certs-348900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0315 21:15:32.246557   11164 network_create.go:281] running [docker network inspect embed-certs-348900] to gather additional debugging logs...
	I0315 21:15:32.246658   11164 cli_runner.go:164] Run: docker network inspect embed-certs-348900
	W0315 21:15:32.585407   11164 cli_runner.go:211] docker network inspect embed-certs-348900 returned with exit code 1
	I0315 21:15:32.585485   11164 network_create.go:284] error running [docker network inspect embed-certs-348900]: docker network inspect embed-certs-348900: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-348900
	I0315 21:15:32.585531   11164 network_create.go:286] output of [docker network inspect embed-certs-348900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-348900
	
	** /stderr **
	I0315 21:15:32.596667   11164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0315 21:15:32.951201   11164 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0315 21:15:32.983071   11164 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e77440}
	I0315 21:15:32.983153   11164 network_create.go:123] attempt to create docker network embed-certs-348900 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0315 21:15:32.994000   11164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900
	I0315 21:15:29.902489    3304 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0315 21:15:30.410425    3304 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0315 21:15:30.439154    3304 kubeadm.go:322] 
	I0315 21:15:30.439418    3304 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0315 21:15:30.439418    3304 kubeadm.go:322] 
	I0315 21:15:30.440591    3304 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0315 21:15:30.440591    3304 kubeadm.go:322] 
	I0315 21:15:30.440591    3304 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0315 21:15:30.440591    3304 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 21:15:30.440591    3304 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 21:15:30.441146    3304 kubeadm.go:322] 
	I0315 21:15:30.441302    3304 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0315 21:15:30.441302    3304 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 21:15:30.441302    3304 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 21:15:30.441302    3304 kubeadm.go:322] 
	I0315 21:15:30.442077    3304 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0315 21:15:30.442368    3304 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0315 21:15:30.442368    3304 kubeadm.go:322] 
	I0315 21:15:30.442768    3304 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1vsykl.s1ca43i7aq3le3xp \
	I0315 21:15:30.442976    3304 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 \
	I0315 21:15:30.442976    3304 kubeadm.go:322]     --control-plane 	  
	I0315 21:15:30.442976    3304 kubeadm.go:322] 
	I0315 21:15:30.442976    3304 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0315 21:15:30.442976    3304 kubeadm.go:322] 
	I0315 21:15:30.442976    3304 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1vsykl.s1ca43i7aq3le3xp \
	I0315 21:15:30.442976    3304 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 
	I0315 21:15:30.449019    3304 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0315 21:15:30.449255    3304 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0315 21:15:30.449632    3304 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0315 21:15:30.449944    3304 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 21:15:30.449944    3304 cni.go:84] Creating CNI manager for ""
	I0315 21:15:30.449944    3304 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0315 21:15:30.449944    3304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 21:15:30.475844    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:30.480125    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=old-k8s-version-103800 minikube.k8s.io/updated_at=2023_03_15T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:30.550685    3304 ops.go:34] apiserver oom_adj: -16
	I0315 21:15:30.665183    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:30.674974    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:15:30.675152    1332 retry.go:31] will retry after 319.696256ms: state is "Stopped"
	I0315 21:15:31.004595    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:31.271800    4576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:15:32.105850    4576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubelet
	I0315 21:15:32.876011    4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubelet': No such file or directory
	I0315 21:15:32.876276    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubelet --> /var/lib/minikube/binaries/v1.26.2/kubelet (121268472 bytes)
	W0315 21:15:33.333982   11164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900 returned with exit code 1
	W0315 21:15:33.334081   11164 network_create.go:148] failed to create docker network embed-certs-348900 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0315 21:15:33.334145   11164 network_create.go:115] failed to create docker network embed-certs-348900 192.168.58.0/24, will retry: subnet is taken
	I0315 21:15:33.379254   11164 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0315 21:15:33.406969   11164 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e10420}
	I0315 21:15:33.406969   11164 network_create.go:123] attempt to create docker network embed-certs-348900 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0315 21:15:33.416637   11164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900
	I0315 21:15:33.931710   11164 network_create.go:107] docker network embed-certs-348900 192.168.67.0/24 created
	I0315 21:15:33.931710   11164 kic.go:117] calculated static IP "192.168.67.2" for the "embed-certs-348900" container
	I0315 21:15:33.961692   11164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0315 21:15:34.382414   11164 cli_runner.go:164] Run: docker volume create embed-certs-348900 --label name.minikube.sigs.k8s.io=embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true
	I0315 21:15:34.716016   11164 oci.go:103] Successfully created a docker volume embed-certs-348900
	I0315 21:15:34.727122   11164 cli_runner.go:164] Run: docker run --rm --name embed-certs-348900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --entrypoint /usr/bin/test -v embed-certs-348900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -d /var/lib
	I0315 21:15:34.549401    3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=old-k8s-version-103800 minikube.k8s.io/updated_at=2023_03_15T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (4.0692845s)
	I0315 21:15:34.549401    3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (4.0735649s)
	I0315 21:15:34.575936    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:35.677911    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:36.689764    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:37.173919    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:37.680647    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:38.677808    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:36.012455    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0315 21:15:36.012558    1332 retry.go:31] will retry after 307.806183ms: state is "Stopped"
	I0315 21:15:36.332781    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:38.718404   11164 cli_runner.go:217] Completed: docker run --rm --name embed-certs-348900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --entrypoint /usr/bin/test -v embed-certs-348900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -d /var/lib: (3.9912367s)
	I0315 21:15:38.718694   11164 oci.go:107] Successfully prepared a docker volume embed-certs-348900
	I0315 21:15:38.718763   11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0315 21:15:38.718763   11164 kic.go:190] Starting extracting preloaded images to volume ...
	I0315 21:15:38.735548   11164 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-348900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -I lz4 -xf /preloaded.tar -C /extractDir
	I0315 21:15:39.684705    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:40.178045    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:41.173794    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:41.681379    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:42.683323    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:43.182131    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:41.339223    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0315 21:15:41.339409    1332 retry.go:31] will retry after 386.719795ms: state is "Stopped"
	I0315 21:15:41.739620    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:44.046130    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 21:15:44.046265    1332 retry.go:31] will retry after 731.95405ms: https://127.0.0.1:65165/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 21:15:44.784826    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:44.930024    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:15:44.930412    1332 kubeadm.go:608] needs reconfigure: apiserver error: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
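The verbose `/healthz` body above marks each check with `[+]` (passing) or `[-]` (failing). A minimal sketch, not minikube's own code, of pulling out just the failing check names from a body like the one logged (sample text copied from the log above):

```shell
# Sample verbose /healthz body, abridged from the log above.
healthz='[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
healthz check failed'

# Keep only "[-]" lines, strip the "[-]" prefix, keep the check name.
printf '%s\n' "$healthz" | grep '^\[-\]' | cut -d']' -f2 | cut -d' ' -f1
```

Here the two `rbac`/`scheduling` bootstrap post-start hooks are the failures that keep the apiserver returning 500 and push minikube onto the "needs reconfigure" path.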
	I0315 21:15:44.930412    1332 kubeadm.go:1120] stopping kube-system containers ...
	I0315 21:15:44.948612    1332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0315 21:15:45.444192    1332 docker.go:456] Stopping containers: [e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0]
	I0315 21:15:45.468741    1332 ssh_runner.go:195] Run: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0
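The `--filter=name=k8s_.*_(kube-system)_` used above relies on kubelet's container naming convention, `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>`, to select only kube-system containers. A sketch of the same match against illustrative (made-up) container names, using `grep` instead of a live Docker daemon:

```shell
# Two example container names following kubelet's naming scheme; only the
# first is in the kube-system namespace and should match.
names='k8s_kube-apiserver_kube-apiserver-pause-073300_kube-system_abc_0
k8s_nginx_nginx-deploy-1234_default_def_0'

# Same regex as the docker ps name filter in the log above.
printf '%s\n' "$names" | grep -E 'k8s_.*_(kube-system)_'
```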
	I0315 21:15:44.191394    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:45.685532    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:48.821222    3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.1356462s)
	I0315 21:15:48.821384    3304 kubeadm.go:1073] duration metric: took 18.3714764s to wait for elevateKubeSystemPrivileges.
	I0315 21:15:48.821384    3304 kubeadm.go:403] StartCluster complete in 50.2400255s
	I0315 21:15:48.821513    3304 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:48.821905    3304 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 21:15:48.825059    3304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:48.828077    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 21:15:48.828077    3304 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0315 21:15:48.828879    3304 config.go:182] Loaded profile config "old-k8s-version-103800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0315 21:15:48.828800    3304 addons.go:66] Setting storage-provisioner=true in profile "old-k8s-version-103800"
	I0315 21:15:48.829118    3304 addons.go:66] Setting default-storageclass=true in profile "old-k8s-version-103800"
	I0315 21:15:48.829179    3304 addons.go:228] Setting addon storage-provisioner=true in "old-k8s-version-103800"
	I0315 21:15:48.829179    3304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-103800"
	I0315 21:15:48.829300    3304 host.go:66] Checking if "old-k8s-version-103800" exists ...
	I0315 21:15:48.878494    3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
	I0315 21:15:48.879545    3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
	I0315 21:15:49.358108    3304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 21:15:50.124359    4576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 21:15:50.223962    4576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (449 bytes)
	I0315 21:15:50.297658    4576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 21:15:50.374920    4576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0315 21:15:50.483223    4576 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0315 21:15:50.503211    4576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 21:15:50.560996    4576 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000 for IP: 192.168.85.2
	I0315 21:15:50.561164    4576 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:50.561830    4576 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0315 21:15:50.562026    4576 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0315 21:15:50.562749    4576 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key
	I0315 21:15:50.562749    4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt with IP's: []
	I0315 21:15:49.456354    3304 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 21:15:49.456907    3304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 21:15:49.478318    3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-103800
	I0315 21:15:49.848514    3304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65315 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\old-k8s-version-103800\id_rsa Username:docker}
	I0315 21:15:49.872375    3304 addons.go:228] Setting addon default-storageclass=true in "old-k8s-version-103800"
	I0315 21:15:49.872629    3304 host.go:66] Checking if "old-k8s-version-103800" exists ...
	I0315 21:15:49.901466    3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
	I0315 21:15:49.934700    3304 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.1066246s)
	I0315 21:15:49.936184    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0315 21:15:50.250574    3304 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 21:15:50.250698    3304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 21:15:50.264810    3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-103800
	I0315 21:15:50.363018    3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 21:15:50.573127    3304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65315 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\old-k8s-version-103800\id_rsa Username:docker}
	I0315 21:15:51.185346    3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 21:15:52.041249    3304 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-103800" context rescaled to 1 replicas
	I0315 21:15:52.041249    3304 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0315 21:15:52.050693    3304 out.go:177] * Verifying Kubernetes components...
	I0315 21:15:52.068989    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:15:52.931105    3304 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.994746s)
	I0315 21:15:52.931105    3304 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
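The host-record injection above edits the CoreDNS ConfigMap with `sed`, inserting a `hosts {}` stanza before the `forward . /etc/resolv.conf` line. A self-contained sketch of that transform against a minimal sample Corefile (the Corefile text here is illustrative; GNU sed's one-line `i \` form with `\n` escapes is assumed, as in the log):

```shell
# Minimal sample Corefile (8-space indentation, as CoreDNS emits).
corefile='.:53 {
        errors
        forward . /etc/resolv.conf
}'

# Insert a hosts{} block before the forward line, mirroring the sed
# expression in the logged command.
printf '%s\n' "$corefile" | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }'
```

The result resolves `host.minikube.internal` to the host's gateway IP inside the cluster while all other queries still fall through to `/etc/resolv.conf`.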
	I0315 21:15:53.543980    3304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.1809688s)
	I0315 21:15:53.543980    3304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3586386s)
	I0315 21:15:53.543980    3304 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.4749945s)
	I0315 21:15:53.547333    3304 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0315 21:15:53.551130    3304 addons.go:499] enable addons completed in 4.7230615s: enabled=[storage-provisioner default-storageclass]
	I0315 21:15:53.562222    3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-103800
	I0315 21:15:53.866492    3304 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-103800" to be "Ready" ...
	I0315 21:15:53.933789    3304 node_ready.go:49] node "old-k8s-version-103800" has status "Ready":"True"
	I0315 21:15:53.933928    3304 node_ready.go:38] duration metric: took 67.3813ms waiting for node "old-k8s-version-103800" to be "Ready" ...
	I0315 21:15:53.933978    3304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:15:53.954978    3304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace to be "Ready" ...
	I0315 21:15:55.263337    1332 ssh_runner.go:235] Completed: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0: (9.7945662s)
	I0315 21:15:55.280007    1332 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 21:15:50.791437    4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt ...
	I0315 21:15:50.811528    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: {Name:mk1a7714c10c13a7d5c8fb1098bc038f605ad5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:50.813206    4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key ...
	I0315 21:15:50.813206    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key: {Name:mk6d5b75048bc1f92c0f990335a0e77ae990113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:50.814115    4576 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c
	I0315 21:15:50.814711    4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0315 21:15:51.462758    4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c ...
	I0315 21:15:51.462758    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c: {Name:mkbe5d6759390ded2e92d33f951b55651f871d6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:51.465635    4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c ...
	I0315 21:15:51.465635    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c: {Name:mkeabc19ce40a151a2335523f300cb2173b405a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:51.465984    4576 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt
	I0315 21:15:51.467767    4576 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key
	I0315 21:15:51.475866    4576 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key
	I0315 21:15:51.475866    4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt with IP's: []
	I0315 21:15:51.587728    4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt ...
	I0315 21:15:51.587834    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt: {Name:mk7c62a1dda77e6dc05d2537ac317544e81f57a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:51.589765    4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key ...
	I0315 21:15:51.589848    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key: {Name:mk8190fc7ddb34a4dc4e27e4845c7aee9bb89866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:51.598260    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem (1338 bytes)
	W0315 21:15:51.600164    4576 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812_empty.pem, impossibly tiny 0 bytes
	I0315 21:15:51.600164    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0315 21:15:51.600164    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0315 21:15:51.600849    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0315 21:15:51.600849    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0315 21:15:51.601444    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem (1708 bytes)
	I0315 21:15:51.603533    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0315 21:15:51.706046    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 21:15:51.773521    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 21:15:51.835553    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 21:15:51.896596    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 21:15:51.961384    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 21:15:52.020772    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 21:15:52.161594    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 21:15:52.223729    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 21:15:52.295451    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem --> /usr/share/ca-certificates/8812.pem (1338 bytes)
	I0315 21:15:52.368796    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /usr/share/ca-certificates/88122.pem (1708 bytes)
	I0315 21:15:52.440447    4576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 21:15:52.501633    4576 ssh_runner.go:195] Run: openssl version
	I0315 21:15:52.539319    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8812.pem && ln -fs /usr/share/ca-certificates/8812.pem /etc/ssl/certs/8812.pem"
	I0315 21:15:52.596897    4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8812.pem
	I0315 21:15:52.617219    4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 15 20:10 /usr/share/ca-certificates/8812.pem
	I0315 21:15:52.634012    4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8812.pem
	I0315 21:15:52.676116    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8812.pem /etc/ssl/certs/51391683.0"
	I0315 21:15:52.732985    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88122.pem && ln -fs /usr/share/ca-certificates/88122.pem /etc/ssl/certs/88122.pem"
	I0315 21:15:52.795424    4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88122.pem
	I0315 21:15:52.811657    4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 15 20:10 /usr/share/ca-certificates/88122.pem
	I0315 21:15:52.824204    4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88122.pem
	I0315 21:15:52.868586    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/88122.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 21:15:52.920203    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 21:15:52.980456    4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:15:52.999359    4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 15 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:15:53.012117    4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:15:53.068045    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
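The repeated `openssl x509 -hash` plus `ln -fs` steps above install each CA into OpenSSL's subject-hash trust layout (`/etc/ssl/certs/<hash>.0`), which is how tools locate a CA by its subject name. A self-contained sketch with a throwaway self-signed cert and local paths instead of the log's real files:

```shell
# Generate a disposable self-signed CA (illustrative subject).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA" -keyout ca.key -out ca.pem 2>/dev/null

# Compute the 8-hex-digit subject hash and link the PEM under it,
# mirroring the `ln -fs ... /etc/ssl/certs/<hash>.0` step in the log.
hash=$(openssl x509 -hash -noout -in ca.pem)
ln -fs "$PWD/ca.pem" "./${hash}.0"   # real target would be /etc/ssl/certs/${hash}.0
echo "$hash"
```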
	I0315 21:15:53.097602    4576 kubeadm.go:401] StartCluster: {Name:no-preload-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:no-preload-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 21:15:53.106935    4576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0315 21:15:53.188443    4576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 21:15:53.248153    4576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 21:15:53.292225    4576 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0315 21:15:53.310023    4576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 21:15:53.350373    4576 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
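The `ls -la` probe above is minikube's stale-config check: if any of the four kubeconfig files is missing, `ls` exits non-zero and the cleanup is skipped in favor of a fresh `kubeadm init`. A sketch of the same branch against a temp directory instead of `/etc/kubernetes` (messages are illustrative):

```shell
# Empty stand-in for /etc/kubernetes on a fresh node.
dir=$(mktemp -d)

# ls exits with a non-zero status when any listed file is missing,
# which is the signal taken above to skip stale-config cleanup.
if ls "$dir/admin.conf" "$dir/kubelet.conf" \
      "$dir/controller-manager.conf" "$dir/scheduler.conf" >/dev/null 2>&1; then
  echo "existing configuration found"
else
  echo "config check failed, skipping stale config cleanup"
fi
```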
	I0315 21:15:53.350373    4576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0315 21:15:53.480709    4576 kubeadm.go:322] W0315 21:15:53.477710    2248 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0315 21:15:53.619484    4576 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0315 21:15:53.941137    4576 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 21:15:56.130859    3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
	I0315 21:15:58.590590    3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
	I0315 21:15:55.667015    1332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 21:15:55.884955    1332 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 15 21:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Mar 15 21:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 15 21:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar 15 21:13 /etc/kubernetes/scheduler.conf
	
	I0315 21:15:55.906317    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 21:15:55.970490    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 21:15:56.077831    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 21:15:56.164837    1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:56.189369    1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 21:15:56.278633    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 21:15:56.350783    1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:56.368651    1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 21:15:56.472488    1332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 21:15:56.554151    1332 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0315 21:15:56.554288    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:56.838520    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:58.821631    1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.9831146s)
	I0315 21:15:58.821631    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:59.241679    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:59.531884    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:59.837145    1332 api_server.go:51] waiting for apiserver process to appear ...
	I0315 21:15:59.862394    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:00.562737    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:01.081569    3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:03.528471    3304 pod_ready.go:92] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:03.528551    3304 pod_ready.go:81] duration metric: took 9.5735907s waiting for pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:03.528551    3304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cfcpx" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:03.557031    3304 pod_ready.go:92] pod "kube-proxy-cfcpx" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:03.557086    3304 pod_ready.go:81] duration metric: took 28.5355ms waiting for pod "kube-proxy-cfcpx" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:03.557086    3304 pod_ready.go:38] duration metric: took 9.623095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:03.557194    3304 api_server.go:51] waiting for apiserver process to appear ...
	I0315 21:16:03.572979    3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:03.613975    3304 api_server.go:71] duration metric: took 11.5727472s to wait for apiserver process to appear ...
	I0315 21:16:03.613975    3304 api_server.go:87] waiting for apiserver healthz status ...
	I0315 21:16:03.613975    3304 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65314/healthz ...
	I0315 21:16:03.643577    3304 api_server.go:278] https://127.0.0.1:65314/healthz returned 200:
	ok
	I0315 21:16:03.656457    3304 api_server.go:140] control plane version: v1.16.0
	I0315 21:16:03.656457    3304 api_server.go:130] duration metric: took 42.4823ms to wait for apiserver health ...
	I0315 21:16:03.656537    3304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 21:16:03.667107    3304 system_pods.go:59] 3 kube-system pods found
	I0315 21:16:03.667180    3304 system_pods.go:61] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:03.667180    3304 system_pods.go:61] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:03.667180    3304 system_pods.go:61] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:03.667180    3304 system_pods.go:74] duration metric: took 10.5957ms to wait for pod list to return data ...
	I0315 21:16:03.667180    3304 default_sa.go:34] waiting for default service account to be created ...
	I0315 21:16:03.676892    3304 default_sa.go:45] found service account: "default"
	I0315 21:16:03.677053    3304 default_sa.go:55] duration metric: took 9.8734ms for default service account to be created ...
	I0315 21:16:03.677104    3304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 21:16:01.047261    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:01.561853    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:02.057572    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:02.554491    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:03.060987    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:03.560744    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:04.058096    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:04.574094    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:05.054883    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:05.558867    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:04.285721    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:04.285721    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:04.285721    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:04.285721    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:04.285721    3304 retry.go:31] will retry after 219.526595ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:04.529762    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:04.529762    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:04.529762    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:04.529762    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:04.529762    3304 retry.go:31] will retry after 379.322135ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:04.941567    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:04.941567    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:04.941567    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:04.941567    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:04.941567    3304 retry.go:31] will retry after 439.394592ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:05.410063    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:05.410190    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:05.410190    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:05.410246    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:05.410246    3304 retry.go:31] will retry after 547.53451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:05.971998    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:05.971998    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:05.971998    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:05.971998    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:05.971998    3304 retry.go:31] will retry after 474.225372ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:06.466534    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:06.466718    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:06.466718    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:06.466718    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:06.466718    3304 retry.go:31] will retry after 680.585019ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:07.175871    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:07.175871    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:07.175871    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:07.175871    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:07.175871    3304 retry.go:31] will retry after 979.191711ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:08.550247    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:08.550247    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:08.550247    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:08.550247    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:08.550247    3304 retry.go:31] will retry after 1.232453731s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:06.064030    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:06.559451    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:06.836193    1332 api_server.go:71] duration metric: took 6.999061s to wait for apiserver process to appear ...
	I0315 21:16:06.836348    1332 api_server.go:87] waiting for apiserver healthz status ...
	I0315 21:16:06.836472    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:06.844702    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:16:07.349930    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:07.360047    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:16:07.852770    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:09.202438   11164 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-348900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -I lz4 -xf /preloaded.tar -C /extractDir: (30.466496s)
	I0315 21:16:09.202651   11164 kic.go:199] duration metric: took 30.483946 seconds to extract preloaded images to volume
	I0315 21:16:09.210313   11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 21:16:10.155940   11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:16:09.4164826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 21:16:10.165464   11164 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0315 21:16:11.073846   11164 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-348900 --name embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-348900 --network embed-certs-348900 --ip 192.168.67.2 --volume embed-certs-348900:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f
	I0315 21:16:12.556246   11164 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-348900 --name embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-348900 --network embed-certs-348900 --ip 192.168.67.2 --volume embed-certs-348900:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f: (1.4822642s)
	I0315 21:16:12.573402   11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Running}}
	I0315 21:16:12.899930   11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
	I0315 21:16:13.219648   11164 cli_runner.go:164] Run: docker exec embed-certs-348900 stat /var/lib/dpkg/alternatives/iptables
	I0315 21:16:09.817018    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:09.817099    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:09.817128    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:09.817171    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:09.817212    3304 retry.go:31] will retry after 1.174345338s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:11.034520    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:11.034666    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:11.034666    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:11.034666    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:11.034865    3304 retry.go:31] will retry after 1.617952037s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:12.678044    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:12.678093    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:12.678161    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:12.678161    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:12.678161    3304 retry.go:31] will retry after 2.664928648s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:12.856341    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0315 21:16:13.355164    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:13.531052    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 21:16:13.531052    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 21:16:13.856894    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:13.948093    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:13.948207    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
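The 500 responses above use the apiserver's verbose /healthz format: one `[+]`/`[-]` line per check, with `rbac/bootstrap-roles` and `scheduling/bootstrap-system-priority-classes` still failing while the post-start hooks finish. Extracting the failing checks from such a body can be sketched with a hypothetical `failingChecks` helper (not part of minikube):

```go
package main

import (
	"fmt"
	"strings"
)

// Abbreviated /healthz body from the log above.
const sample = `[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
healthz check failed`

// failingChecks returns the names of checks marked "[-]" in a verbose
// /healthz response body (hypothetical helper for illustration).
func failingChecks(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		if strings.HasPrefix(line, "[-]") {
			name := strings.TrimPrefix(line, "[-]")
			if i := strings.Index(name, " "); i >= 0 {
				name = name[:i]
			}
			failed = append(failed, name)
		}
	}
	return failed
}

func main() {
	fmt.Println(failingChecks(sample))
}
```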
	I0315 21:16:14.353756    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:14.444021    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:14.444582    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:14.850032    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:14.881729    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:14.881822    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:15.359619    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:15.458273    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:15.458359    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:15.846895    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:15.875897    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
	ok
	I0315 21:16:15.909269    1332 api_server.go:140] control plane version: v1.26.2
	I0315 21:16:15.909297    1332 api_server.go:130] duration metric: took 9.0729659s to wait for apiserver health ...
	I0315 21:16:15.909353    1332 cni.go:84] Creating CNI manager for ""
	I0315 21:16:15.909353    1332 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:16:15.912744    1332 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
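	The healthz sequence above shows repeated 500s (with only `[-]poststarthook/rbac/bootstrap-roles` failing) before the 200 at 21:16:15. A minimal sketch of counting the failing checks in a verbose healthz response; the sample text is copied from the log, not fetched live:

```shell
# Count failing ([-]) checks in a verbose /healthz response.
# Sample response copied from the 500 responses logged above.
healthz_output='[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
healthz check failed'

failing=$(printf '%s\n' "$healthz_output" | grep -c '^\[-\]')
echo "failing checks: $failing"
```

	Once every hook reports `[+]`, the endpoint returns 200 with body `ok`, which is what minikube's `api_server.go` wait loop accepts.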
	I0315 21:16:13.756342   11164 oci.go:144] the created container "embed-certs-348900" has a running status.
	I0315 21:16:13.756477   11164 kic.go:221] Creating ssh key for kic: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa...
	I0315 21:16:14.119932   11164 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0315 21:16:14.639346   11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
	I0315 21:16:14.940713   11164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0315 21:16:14.940713   11164 kic_runner.go:114] Args: [docker exec --privileged embed-certs-348900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0315 21:16:15.500441   11164 kic.go:261] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa...
	I0315 21:16:16.178648   11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
	I0315 21:16:16.488888   11164 machine.go:88] provisioning docker machine ...
	I0315 21:16:16.488888   11164 ubuntu.go:169] provisioning hostname "embed-certs-348900"
	I0315 21:16:16.502911   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:16.840113   11164 main.go:141] libmachine: Using SSH client type: native
	I0315 21:16:16.856244   11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65481 <nil> <nil>}
	I0315 21:16:16.856277   11164 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-348900 && echo "embed-certs-348900" | sudo tee /etc/hostname
	I0315 21:16:17.147013   11164 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-348900
	
	I0315 21:16:17.160758   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:17.464133   11164 main.go:141] libmachine: Using SSH client type: native
	I0315 21:16:17.465429   11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65481 <nil> <nil>}
	I0315 21:16:17.465429   11164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-348900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-348900/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-348900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 21:16:17.739135   11164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
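	The hostname script above either rewrites an existing `127.0.1.1` entry or appends one. A sketch of the same patch against a scratch copy of `/etc/hosts` (a temp file here, so nothing system-wide is touched):

```shell
# Reproduce the /etc/hosts patch from the provisioning script above,
# against a throwaway file instead of the real /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"

if grep -q '^127.0.1.1\s' "$hosts"; then
  # Existing 127.0.1.1 entry: rewrite it in place.
  sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-348900/' "$hosts"
else
  # No entry yet: append one.
  echo '127.0.1.1 embed-certs-348900' >> "$hosts"
fi
grep '^127.0.1.1' "$hosts"
```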
	I0315 21:16:17.739135   11164 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0315 21:16:17.739135   11164 ubuntu.go:177] setting up certificates
	I0315 21:16:17.739135   11164 provision.go:83] configureAuth start
	I0315 21:16:17.755889   11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
	I0315 21:16:18.035724   11164 provision.go:138] copyHostCerts
	I0315 21:16:18.036560   11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0315 21:16:18.036560   11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0315 21:16:18.037267   11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0315 21:16:18.038895   11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0315 21:16:18.038895   11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0315 21:16:18.039720   11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0315 21:16:18.041165   11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0315 21:16:18.041165   11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0315 21:16:18.041925   11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0315 21:16:18.042745   11164 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.embed-certs-348900 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-348900]
	I0315 21:16:15.383021    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:15.383097    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:15.383222    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:15.383222    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:15.383288    3304 retry.go:31] will retry after 2.578717787s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:17.995544    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:17.995544    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:17.995544    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:17.995544    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:17.997123    3304 retry.go:31] will retry after 3.689658526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
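	The `retry.go` lines above poll kube-system pods and retry with a jittered delay until the control-plane components appear. An illustrative retry-loop skeleton; the component check is faked with a counter, and the real loop sleeps a jittered interval (2.57s, 3.69s above) between attempts:

```shell
# Skeleton of the wait-for-components loop: retry until the check passes.
# The check is simulated; minikube's real check lists kube-system pods.
attempt=0
until [ "$attempt" -ge 3 ]; do
  attempt=$((attempt + 1))
  echo "retry $attempt: missing components"
done
echo "components ready"
```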
	I0315 21:16:15.925415    1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 21:16:15.965847    1332 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 21:16:16.079955    1332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 21:16:16.096342    1332 system_pods.go:59] 6 kube-system pods found
	I0315 21:16:16.096342    1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 21:16:16.096342    1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 21:16:16.096342    1332 system_pods.go:74] duration metric: took 16.2168ms to wait for pod list to return data ...
	I0315 21:16:16.096342    1332 node_conditions.go:102] verifying NodePressure condition ...
	I0315 21:16:16.105140    1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0315 21:16:16.105226    1332 node_conditions.go:123] node cpu capacity is 16
	I0315 21:16:16.105269    1332 node_conditions.go:105] duration metric: took 8.8846ms to run NodePressure ...
	I0315 21:16:16.105316    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:16:17.333440    1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.2280887s)
	I0315 21:16:17.333615    1332 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0315 21:16:17.354686    1332 kubeadm.go:784] kubelet initialised
	I0315 21:16:17.354754    1332 kubeadm.go:785] duration metric: took 21.1391ms waiting for restarted kubelet to initialise ...
	I0315 21:16:17.354822    1332 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:17.435085    1332 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:19.521467    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:18.251532   11164 provision.go:172] copyRemoteCerts
	I0315 21:16:18.273974   11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 21:16:18.283506   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:18.570902   11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
	I0315 21:16:18.768649   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 21:16:18.841686   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0315 21:16:18.905617   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 21:16:18.967699   11164 provision.go:86] duration metric: configureAuth took 1.2285308s
	I0315 21:16:18.967770   11164 ubuntu.go:193] setting minikube options for container-runtime
	I0315 21:16:18.968727   11164 config.go:182] Loaded profile config "embed-certs-348900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:16:18.979877   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:19.285905   11164 main.go:141] libmachine: Using SSH client type: native
	I0315 21:16:19.286914   11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65481 <nil> <nil>}
	I0315 21:16:19.286979   11164 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0315 21:16:19.567687   11164 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0315 21:16:19.567687   11164 ubuntu.go:71] root file system type: overlay
	I0315 21:16:19.567687   11164 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0315 21:16:19.582813   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:19.874162   11164 main.go:141] libmachine: Using SSH client type: native
	I0315 21:16:19.875396   11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65481 <nil> <nil>}
	I0315 21:16:19.875396   11164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0315 21:16:20.174872   11164 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0315 21:16:20.188182   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:20.453718   11164 main.go:141] libmachine: Using SSH client type: native
	I0315 21:16:20.454944   11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65481 <nil> <nil>}
	I0315 21:16:20.454944   11164 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0315 21:16:22.142486   11164 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-15 21:16:20.152689000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
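	The `diff -u ... || { mv ...; systemctl ...; }` command above only installs the rendered unit and restarts Docker when it actually differs from the one on disk. A sketch of that update-if-changed idiom on two temp files (the daemon-reload/restart step is replaced with an echo):

```shell
# Update-if-changed: replace the unit only when the new rendering differs,
# mirroring minikube's docker.service provisioning step above.
current=$(mktemp); new=$(mktemp)
printf 'Restart=always\n' > "$current"
printf 'Restart=on-failure\n' > "$new"

if ! diff -u "$current" "$new" > /dev/null; then
  mv "$new" "$current"   # real flow: daemon-reload, enable, restart docker
  echo "unit updated"
else
  echo "unit unchanged"
fi
```

	Note the leading empty `ExecStart=` in the rendered unit: systemd requires clearing the inherited command first, otherwise a second `ExecStart=` is rejected for non-oneshot services.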
	
	I0315 21:16:22.142486   11164 machine.go:91] provisioned docker machine in 5.6536091s
	I0315 21:16:22.142486   11164 client.go:171] LocalClient.Create took 50.2614576s
	I0315 21:16:22.142486   11164 start.go:167] duration metric: libmachine.API.Create for "embed-certs-348900" took 50.2615841s
	I0315 21:16:22.142486   11164 start.go:300] post-start starting for "embed-certs-348900" (driver="docker")
	I0315 21:16:22.142486   11164 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 21:16:22.164869   11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 21:16:22.176134   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:22.457317   11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
	I0315 21:16:22.664346   11164 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 21:16:22.686266   11164 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0315 21:16:22.686266   11164 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0315 21:16:22.686266   11164 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0315 21:16:22.686266   11164 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0315 21:16:22.686266   11164 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0315 21:16:22.686902   11164 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0315 21:16:22.688699   11164 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem -> 88122.pem in /etc/ssl/certs
	I0315 21:16:22.706595   11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 21:16:22.738368   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /etc/ssl/certs/88122.pem (1708 bytes)
	I0315 21:16:22.808162   11164 start.go:303] post-start completed in 665.6768ms
	I0315 21:16:22.820367   11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
	I0315 21:16:23.085450   11164 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json ...
	I0315 21:16:23.099327   11164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 21:16:23.105640   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:21.705945    3304 system_pods.go:86] 4 kube-system pods found
	I0315 21:16:21.706010    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:21.706103    3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Pending
	I0315 21:16:21.706103    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:21.706185    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:21.706219    3304 retry.go:31] will retry after 5.083561084s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:22.006711    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:24.016700    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:23.396840   11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
	I0315 21:16:23.581013   11164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0315 21:16:23.600663   11164 start.go:128] duration metric: createHost completed in 51.7244434s
	I0315 21:16:23.600663   11164 start.go:83] releasing machines lock for "embed-certs-348900", held for 51.7253337s
	I0315 21:16:23.612591   11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
	I0315 21:16:23.883432   11164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 21:16:23.894275   11164 ssh_runner.go:195] Run: cat /version.json
	I0315 21:16:23.894535   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:23.897398   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:24.187980   11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
	I0315 21:16:24.211376   11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
	I0315 21:16:24.384184   11164 ssh_runner.go:195] Run: systemctl --version
	I0315 21:16:24.554870   11164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0315 21:16:24.601965   11164 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0315 21:16:24.636442   11164 start.go:407] unable to name loopback interface in dockerConfigureNetworkPlugin: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0315 21:16:24.653193   11164 ssh_runner.go:195] Run: which cri-dockerd
	I0315 21:16:24.687918   11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0315 21:16:24.720950   11164 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0315 21:16:24.782057   11164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 21:16:24.838659   11164 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0315 21:16:24.838782   11164 start.go:485] detecting cgroup driver to use...
	I0315 21:16:24.838782   11164 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0315 21:16:24.839372   11164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 21:16:24.907810   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0315 21:16:24.962942   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0315 21:16:24.999607   11164 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0315 21:16:25.016372   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0315 21:16:25.084691   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 21:16:25.123717   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0315 21:16:25.175564   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 21:16:25.220146   11164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 21:16:25.283915   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0315 21:16:25.334938   11164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 21:16:25.388356   11164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 21:16:25.435298   11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 21:16:25.641460   11164 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0315 21:16:25.860833   11164 start.go:485] detecting cgroup driver to use...
	I0315 21:16:25.861441   11164 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0315 21:16:25.882735   11164 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0315 21:16:25.939579   11164 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0315 21:16:25.960420   11164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0315 21:16:26.059890   11164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 21:16:26.183579   11164 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0315 21:16:26.466649   11164 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0315 21:16:26.677013   11164 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0315 21:16:26.677080   11164 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0315 21:16:26.756071   11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 21:16:26.959814   11164 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0315 21:16:27.700313   11164 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0315 21:16:27.915578   11164 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0315 21:16:28.148265   11164 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0315 21:16:26.834333    3304 system_pods.go:86] 5 kube-system pods found
	I0315 21:16:26.834442    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:26.834494    3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Running
	I0315 21:16:26.834494    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:26.834542    3304 system_pods.go:89] "kube-scheduler-old-k8s-version-103800" [2c673315-0d1e-4a5d-a5d7-738e38d7cf84] Pending
	I0315 21:16:26.834542    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:26.834542    3304 retry.go:31] will retry after 6.853083205s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:29.227662    4576 kubeadm.go:322] [init] Using Kubernetes version: v1.26.2
	I0315 21:16:29.227763    4576 kubeadm.go:322] [preflight] Running pre-flight checks
	I0315 21:16:29.227763    4576 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 21:16:29.227763    4576 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 21:16:29.227763    4576 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 21:16:29.229013    4576 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 21:16:29.233640    4576 out.go:204]   - Generating certificates and keys ...
	I0315 21:16:29.234315    4576 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0315 21:16:29.234315    4576 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0315 21:16:29.234315    4576 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 21:16:29.234862    4576 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0315 21:16:29.235050    4576 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0315 21:16:29.235155    4576 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0315 21:16:29.235331    4576 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0315 21:16:29.235774    4576 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-470000] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0315 21:16:29.235871    4576 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0315 21:16:29.235871    4576 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-470000] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0315 21:16:29.236566    4576 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 21:16:29.236865    4576 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 21:16:29.237080    4576 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0315 21:16:29.237437    4576 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 21:16:29.237659    4576 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 21:16:29.237841    4576 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 21:16:29.238095    4576 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 21:16:29.238325    4576 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 21:16:29.238639    4576 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 21:16:29.238966    4576 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 21:16:29.239000    4576 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0315 21:16:29.239299    4576 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 21:16:29.244122    4576 out.go:204]   - Booting up control plane ...
	I0315 21:16:29.244122    4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 21:16:29.244122    4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 21:16:29.244875    4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 21:16:29.245231    4576 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 21:16:29.245856    4576 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 21:16:29.246514    4576 kubeadm.go:322] [apiclient] All control plane components are healthy after 27.005043 seconds
	I0315 21:16:29.247464    4576 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 21:16:29.247889    4576 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 21:16:29.247889    4576 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 21:16:29.249317    4576 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-470000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 21:16:29.249647    4576 kubeadm.go:322] [bootstrap-token] Using token: g8jwe6.dtydkfj8fkgcjwxk
	I0315 21:16:29.253362    4576 out.go:204]   - Configuring RBAC rules ...
	I0315 21:16:29.253362    4576 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 21:16:29.253982    4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 21:16:29.254534    4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 21:16:29.254971    4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 21:16:29.255290    4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 21:16:29.255767    4576 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 21:16:29.256101    4576 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 21:16:29.256445    4576 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0315 21:16:29.256697    4576 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0315 21:16:29.256697    4576 kubeadm.go:322] 
	I0315 21:16:29.256697    4576 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0315 21:16:29.256697    4576 kubeadm.go:322] 
	I0315 21:16:29.256697    4576 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0315 21:16:29.257255    4576 kubeadm.go:322] 
	I0315 21:16:29.257312    4576 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0315 21:16:29.257312    4576 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 21:16:29.258206    4576 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 21:16:29.258206    4576 kubeadm.go:322] 
	I0315 21:16:29.258392    4576 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0315 21:16:29.258392    4576 kubeadm.go:322] 
	I0315 21:16:29.258392    4576 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 21:16:29.258392    4576 kubeadm.go:322] 
	I0315 21:16:29.259028    4576 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0315 21:16:29.259028    4576 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 21:16:29.259028    4576 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 21:16:29.259586    4576 kubeadm.go:322] 
	I0315 21:16:29.259793    4576 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 21:16:29.259793    4576 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0315 21:16:29.259793    4576 kubeadm.go:322] 
	I0315 21:16:29.260469    4576 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g8jwe6.dtydkfj8fkgcjwxk \
	I0315 21:16:29.260726    4576 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 \
	I0315 21:16:29.260890    4576 kubeadm.go:322] 	--control-plane 
	I0315 21:16:29.260890    4576 kubeadm.go:322] 
	I0315 21:16:29.261169    4576 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0315 21:16:29.261228    4576 kubeadm.go:322] 
	I0315 21:16:29.261412    4576 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g8jwe6.dtydkfj8fkgcjwxk \
	I0315 21:16:29.261412    4576 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 
	I0315 21:16:29.261412    4576 cni.go:84] Creating CNI manager for ""
	I0315 21:16:29.261412    4576 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:16:29.266347    4576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 21:16:28.373729   11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 21:16:28.596843   11164 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0315 21:16:28.641503   11164 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0315 21:16:28.659715   11164 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0315 21:16:28.687449   11164 start.go:553] Will wait 60s for crictl version
	I0315 21:16:28.704098   11164 ssh_runner.go:195] Run: which crictl
	I0315 21:16:28.753769   11164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 21:16:29.076356   11164 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0315 21:16:29.092004   11164 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0315 21:16:29.211116   11164 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0315 21:16:26.048179    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:28.050667    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:29.001447    1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.001447    1332 pod_ready.go:81] duration metric: took 11.5663842s waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.001447    1332 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.028330    1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.028330    1332 pod_ready.go:81] duration metric: took 26.8832ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.028330    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.057628    1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.057628    1332 pod_ready.go:81] duration metric: took 29.2978ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.057628    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.092004    1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.092004    1332 pod_ready.go:81] duration metric: took 34.3758ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.092004    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.131434    1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.131486    1332 pod_ready.go:81] duration metric: took 39.482ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.131486    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.402295    1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.402345    1332 pod_ready.go:81] duration metric: took 270.8098ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.402345    1332 pod_ready.go:38] duration metric: took 12.0475003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:29.402386    1332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 21:16:29.426130    1332 ops.go:34] apiserver oom_adj: -16
	I0315 21:16:29.426187    1332 kubeadm.go:637] restartCluster took 1m4.338895s
	I0315 21:16:29.426266    1332 kubeadm.go:403] StartCluster complete in 1m4.4532784s
	I0315 21:16:29.426351    1332 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:29.426601    1332 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 21:16:29.429857    1332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:29.432982    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 21:16:29.432982    1332 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0315 21:16:29.433680    1332 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:16:29.438415    1332 out.go:177] * Enabled addons: 
	I0315 21:16:29.443738    1332 addons.go:499] enable addons completed in 10.8462ms: enabled=[]
	I0315 21:16:29.452842    1332 kapi.go:59] client config for pause-073300: &rest.Config{Host:"https://127.0.0.1:65165", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1deb720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0315 21:16:29.467764    1332 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-073300" context rescaled to 1 replicas
	I0315 21:16:29.467764    1332 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0315 21:16:29.470858    1332 out.go:177] * Verifying Kubernetes components...
	I0315 21:16:29.484573    1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:16:29.761590    1332 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0315 21:16:29.775423    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-073300
	I0315 21:16:30.117208    1332 node_ready.go:35] waiting up to 6m0s for node "pause-073300" to be "Ready" ...
	I0315 21:16:30.134817    1332 node_ready.go:49] node "pause-073300" has status "Ready":"True"
	I0315 21:16:30.134886    1332 node_ready.go:38] duration metric: took 17.4789ms waiting for node "pause-073300" to be "Ready" ...
	I0315 21:16:30.135066    1332 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:30.162562    1332 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.219441    1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:30.219583    1332 pod_ready.go:81] duration metric: took 57.0207ms waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.219583    1332 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.608418    1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:30.608458    1332 pod_ready.go:81] duration metric: took 388.876ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.608458    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.286357    4576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 21:16:29.434851    4576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 21:16:29.759117    4576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 21:16:29.777121    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:29.784090    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=no-preload-470000 minikube.k8s.io/updated_at=2023_03_15T21_16_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:29.333720   11164 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0315 21:16:29.346161   11164 cli_runner.go:164] Run: docker exec -t embed-certs-348900 dig +short host.docker.internal
	I0315 21:16:29.900879   11164 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0315 21:16:29.916562   11164 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0315 21:16:29.935552   11164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 21:16:29.995136   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:30.338304   11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0315 21:16:30.350351   11164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0315 21:16:30.410968   11164 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0315 21:16:30.410997   11164 docker.go:560] Images already preloaded, skipping extraction
	I0315 21:16:30.423332   11164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0315 21:16:30.503657   11164 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0315 21:16:30.503657   11164 cache_images.go:84] Images are preloaded, skipping loading
	I0315 21:16:30.514842   11164 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0315 21:16:30.592454   11164 cni.go:84] Creating CNI manager for ""
	I0315 21:16:30.593071   11164 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:16:30.593126   11164 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0315 21:16:30.593164   11164 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-348900 NodeName:embed-certs-348900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0315 21:16:30.593164   11164 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-348900"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 21:16:30.593164   11164 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-348900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0315 21:16:30.608458   11164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0315 21:16:30.650429   11164 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 21:16:30.663574   11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 21:16:30.692787   11164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0315 21:16:30.740392   11164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 21:16:30.785258   11164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0315 21:16:30.856683   11164 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0315 21:16:30.874232   11164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 21:16:30.910227   11164 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900 for IP: 192.168.67.2
	I0315 21:16:30.910227   11164 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:30.910959   11164 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0315 21:16:30.910959   11164 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0315 21:16:30.912090   11164 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key
	I0315 21:16:30.912245   11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt with IP's: []
	I0315 21:16:31.176322   11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt ...
	I0315 21:16:31.176322   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt: {Name:mk3adaad25efd04206f4069d51ba11c764eb6365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:31.185180   11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key ...
	I0315 21:16:31.186710   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key: {Name:mkf9f54f56133eba18d6e348fef5a1556121e000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:31.186988   11164 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e
	I0315 21:16:31.187994   11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0315 21:16:31.980645   11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e ...
	I0315 21:16:31.980645   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e: {Name:mk2261dfadf80693084f767fa62cccae0b07268d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:31.987167   11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e ...
	I0315 21:16:31.987167   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e: {Name:mk003ae0b84dcfe7543e40c97ad15121d53cc917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:31.988356   11164 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt
	I0315 21:16:31.999575   11164 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key
	I0315 21:16:32.001372   11164 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key
	I0315 21:16:32.001790   11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt with IP's: []
	I0315 21:16:32.228690   11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt ...
	I0315 21:16:32.228763   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt: {Name:mk6cbb1c106aa2dec99a9338908a5ea76d5206ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:32.230290   11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key ...
	I0315 21:16:32.230290   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key: {Name:mk5c3038fe2a59bd4ebdf1cb320d733f3de9b70e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:32.243236   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem (1338 bytes)
	W0315 21:16:32.243866   11164 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812_empty.pem, impossibly tiny 0 bytes
	I0315 21:16:32.244089   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0315 21:16:32.244671   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0315 21:16:32.245081   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0315 21:16:32.245162   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0315 21:16:32.245850   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem (1708 bytes)
	I0315 21:16:32.248063   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0315 21:16:32.321659   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 21:16:32.402505   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 21:16:32.491666   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 21:16:32.579600   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 21:16:32.651879   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 21:16:32.716051   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 21:16:32.797235   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 21:16:32.885295   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem --> /usr/share/ca-certificates/8812.pem (1338 bytes)
	I0315 21:16:32.963869   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /usr/share/ca-certificates/88122.pem (1708 bytes)
	I0315 21:16:33.029503   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 21:16:33.108304   11164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 21:16:33.169580   11164 ssh_runner.go:195] Run: openssl version
	I0315 21:16:33.195467   11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 21:16:33.230164   11164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:16:31.017074    1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:31.017074    1332 pod_ready.go:81] duration metric: took 408.6175ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.017074    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.395349    1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:31.395349    1332 pod_ready.go:81] duration metric: took 378.275ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.395349    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.792495    1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:31.792495    1332 pod_ready.go:81] duration metric: took 397.1476ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.792495    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:32.219569    1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:32.220120    1332 pod_ready.go:81] duration metric: took 427.0739ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:32.220120    1332 pod_ready.go:38] duration metric: took 2.0850147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:32.220120    1332 api_server.go:51] waiting for apiserver process to appear ...
	I0315 21:16:32.232971    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:32.332638    1332 api_server.go:71] duration metric: took 2.8648801s to wait for apiserver process to appear ...
	I0315 21:16:32.332638    1332 api_server.go:87] waiting for apiserver healthz status ...
	I0315 21:16:32.332638    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:32.362918    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
	ok
	I0315 21:16:32.430820    1332 api_server.go:140] control plane version: v1.26.2
	I0315 21:16:32.430820    1332 api_server.go:130] duration metric: took 98.1819ms to wait for apiserver health ...
	I0315 21:16:32.430820    1332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 21:16:32.455349    1332 system_pods.go:59] 6 kube-system pods found
	I0315 21:16:32.455486    1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
	I0315 21:16:32.455486    1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
	I0315 21:16:32.455544    1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
	I0315 21:16:32.455642    1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
	I0315 21:16:32.455642    1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
	I0315 21:16:32.455685    1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
	I0315 21:16:32.455785    1332 system_pods.go:74] duration metric: took 24.9239ms to wait for pod list to return data ...
	I0315 21:16:32.455785    1332 default_sa.go:34] waiting for default service account to be created ...
	I0315 21:16:32.637154    1332 default_sa.go:45] found service account: "default"
	I0315 21:16:32.637301    1332 default_sa.go:55] duration metric: took 181.4813ms for default service account to be created ...
	I0315 21:16:32.637301    1332 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 21:16:32.844031    1332 system_pods.go:86] 6 kube-system pods found
	I0315 21:16:32.844031    1332 system_pods.go:89] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
	I0315 21:16:32.844031    1332 system_pods.go:126] duration metric: took 206.7296ms to wait for k8s-apps to be running ...
	I0315 21:16:32.844031    1332 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 21:16:32.858698    1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:16:32.902525    1332 system_svc.go:56] duration metric: took 56.9493ms WaitForService to wait for kubelet.
	I0315 21:16:32.902598    1332 kubeadm.go:578] duration metric: took 3.4348415s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0315 21:16:32.902669    1332 node_conditions.go:102] verifying NodePressure condition ...
	I0315 21:16:33.016156    1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0315 21:16:33.016241    1332 node_conditions.go:123] node cpu capacity is 16
	I0315 21:16:33.016278    1332 node_conditions.go:105] duration metric: took 113.5716ms to run NodePressure ...
	I0315 21:16:33.016316    1332 start.go:228] waiting for startup goroutines ...
	I0315 21:16:33.016316    1332 start.go:233] waiting for cluster config update ...
	I0315 21:16:33.016351    1332 start.go:242] writing updated cluster config ...
	I0315 21:16:33.039378    1332 ssh_runner.go:195] Run: rm -f paused
	I0315 21:16:33.289071    1332 start.go:555] kubectl: 1.18.2, cluster: 1.26.2 (minor skew: 8)
	I0315 21:16:33.292949    1332 out.go:177] 
	W0315 21:16:33.295479    1332 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.2.
	I0315 21:16:33.297706    1332 out.go:177]   - Want kubectl v1.26.2? Try 'minikube kubectl -- get pods -A'
	I0315 21:16:33.301501    1332 out.go:177] * Done! kubectl is now configured to use "pause-073300" cluster and "default" namespace by default
	I0315 21:16:33.717595    3304 system_pods.go:86] 6 kube-system pods found
	I0315 21:16:33.717595    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:33.717595    3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Running
	I0315 21:16:33.717595    3304 system_pods.go:89] "kube-controller-manager-old-k8s-version-103800" [eaf30ba4-8812-46a0-a046-aa376656a6eb] Pending
	I0315 21:16:33.717595    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:33.717595    3304 system_pods.go:89] "kube-scheduler-old-k8s-version-103800" [2c673315-0d1e-4a5d-a5d7-738e38d7cf84] Pending
	I0315 21:16:33.717595    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:33.717595    3304 retry.go:31] will retry after 7.396011667s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:31.527682    4576 ssh_runner.go:235] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.7684448s)
	I0315 21:16:31.527682    4576 ops.go:34] apiserver oom_adj: -16
	I0315 21:16:31.527682    4576 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=no-preload-470000 minikube.k8s.io/updated_at=2023_03_15T21_16_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.7435955s)
	I0315 21:16:31.528138    4576 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.7509547s)
	I0315 21:16:31.546907    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:32.651879    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:33.157563    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:33.663575    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:34.656851    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:35.154601    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:35.655087    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2023-03-15 21:13:03 UTC, end at Wed 2023-03-15 21:16:38 UTC. --
	Mar 15 21:15:16 pause-073300 dockerd[5130]: time="2023-03-15T21:15:16.627341500Z" level=info msg="Loading containers: start."
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.180814100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.293764400Z" level=info msg="Loading containers: done."
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403670900Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403801700Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403820400Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403829500Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403946800Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.404077100Z" level=info msg="Daemon has completed initialization"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.495876500Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 15 21:15:17 pause-073300 systemd[1]: Started Docker Application Container Engine.
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.517552200Z" level=info msg="API listen on [::]:2376"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.543627500Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.744692100Z" level=info msg="ignoring event" container=923853eff8e2f1864e6cfeaaffa94363f41b1b6d4244613c11e443d63b83f2f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.744884600Z" level=info msg="ignoring event" container=51f04c53d355992b4720b6fe3fb08eeebaffdc34d08262d17db9f24dc486c5f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.839172700Z" level=info msg="ignoring event" container=c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.840438600Z" level=info msg="ignoring event" container=e92b1a5d6d0c83422026888e04b4103fbb1a6aad2a814bd916a79bec7e5cb8d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.853642900Z" level=info msg="ignoring event" container=a35da045d30f2532ff1a5d88e989615ddf33df4f90272696757ca1b38c1a5eba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.927068700Z" level=info msg="ignoring event" container=ed67a04efb8ec818ab6782a05f9c291801a4458a1a0233c184aaf80f6bd8a373 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.927810400Z" level=info msg="ignoring event" container=95e8431f84471d1685f5d908a022789eb2644a61f5292997dfe306c1e9821c27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.033930300Z" level=info msg="ignoring event" container=e722cf7eda6bbc9bcf453efc486e10336872ccd7d74dbeb91e51085c094b0009 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.128698500Z" level=info msg="ignoring event" container=1f51fce69c226f17529256ccf645edbf972854fc5f36bf524dd8bb1a98d65d9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.434269500Z" level=info msg="ignoring event" container=6824568445c66b1f085e714f1a98df4ca1f40f4f7f67ed8f6069fbde15fd4b87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:51 pause-073300 dockerd[5130]: time="2023-03-15T21:15:51.189996200Z" level=info msg="ignoring event" container=e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:55 pause-073300 dockerd[5130]: time="2023-03-15T21:15:55.079374900Z" level=info msg="ignoring event" container=0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	c3986aec6e000       5185b96f0becf       11 seconds ago       Running             coredns                   2                   a5bac8046c295
	b7b4669a56d5c       6f64e7135a6ec       13 seconds ago       Running             kube-proxy                2                   0e90c4b9c88b9
	aba41f11fdc83       fce326961ae2d       37 seconds ago       Running             etcd                      2                   f6e4108617808
	571d485669178       db8f409d9a5d7       37 seconds ago       Running             kube-scheduler            2                   cc13660f35478
	e6bb3d9a35ff0       240e201d5b0d8       37 seconds ago       Running             kube-controller-manager   3                   c468745ca2cf5
	88f9444587356       63d3239c3c159       37 seconds ago       Running             kube-apiserver            3                   5496303bf33fe
	e3043962e5ef5       5185b96f0becf       About a minute ago   Exited              coredns                   1                   51f04c53d3559
	6824568445c66       fce326961ae2d       About a minute ago   Exited              etcd                      1                   a35da045d30f2
	95e8431f84471       db8f409d9a5d7       About a minute ago   Exited              kube-scheduler            1                   923853eff8e2f
	1f51fce69c226       240e201d5b0d8       About a minute ago   Exited              kube-controller-manager   2                   e722cf7eda6bb
	c2ad60cad36db       6f64e7135a6ec       About a minute ago   Exited              kube-proxy                1                   e92b1a5d6d0c8
	0cb5567e32abb       63d3239c3c159       About a minute ago   Exited              kube-apiserver            2                   ed67a04efb8ec
	
	* 
	* ==> coredns [c3986aec6e00] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:39857 - 53557 "HINFO IN 4117550418294164078.6192551117797702913. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0986876s
	
	* 
	* ==> coredns [e3043962e5ef] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:58165 - 40858 "HINFO IN 6114658028450402923.1632777775304523244. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0560197s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-073300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-073300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e
	                    minikube.k8s.io/name=pause-073300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_15T21_14_05_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Mar 2023 21:13:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-073300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 15 Mar 2023 21:16:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Mar 2023 21:16:13 +0000   Wed, 15 Mar 2023 21:13:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Mar 2023 21:16:13 +0000   Wed, 15 Mar 2023 21:13:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Mar 2023 21:16:13 +0000   Wed, 15 Mar 2023 21:13:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 15 Mar 2023 21:16:13 +0000   Wed, 15 Mar 2023 21:14:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-073300
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 b1932dc991aa41bd806e459062926d45
	  System UUID:                b1932dc991aa41bd806e459062926d45
	  Boot ID:                    c49fbee3-0cdd-49eb-8984-31df821a263f
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-2q246                100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m21s
	  kube-system                 etcd-pause-073300                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         2m38s
	  kube-system                 kube-apiserver-pause-073300             250m (1%)     0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-controller-manager-pause-073300    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-proxy-m4md5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-pause-073300             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m13s                kube-proxy       
	  Normal  Starting                 12s                  kube-proxy       
	  Normal  NodeHasSufficientPID     3m8s (x7 over 3m9s)  kubelet          Node pause-073300 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    3m8s (x8 over 3m9s)  kubelet          Node pause-073300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  3m8s (x8 over 3m9s)  kubelet          Node pause-073300 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m33s                kubelet          Node pause-073300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m33s                kubelet          Node pause-073300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m33s                kubelet          Node pause-073300 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m32s                kubelet          Node pause-073300 status is now: NodeNotReady
	  Normal  NodeReady                2m31s                kubelet          Node pause-073300 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  2m31s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m22s                node-controller  Node pause-073300 event: Registered Node pause-073300 in Controller
	  Normal  Starting                 39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     38s (x7 over 38s)    kubelet          Node pause-073300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  37s (x8 over 38s)    kubelet          Node pause-073300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 38s)    kubelet          Node pause-073300 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           10s                  node-controller  Node pause-073300 event: Registered Node pause-073300 in Controller
	
	* 
	* ==> dmesg <==
	* [Mar15 20:45] WSL2: Performing memory compaction.
	[Mar15 20:47] WSL2: Performing memory compaction.
	[Mar15 20:48] WSL2: Performing memory compaction.
	[Mar15 20:49] WSL2: Performing memory compaction.
	[Mar15 20:51] WSL2: Performing memory compaction.
	[Mar15 20:52] WSL2: Performing memory compaction.
	[Mar15 20:53] WSL2: Performing memory compaction.
	[Mar15 20:54] WSL2: Performing memory compaction.
	[Mar15 20:56] WSL2: Performing memory compaction.
	[Mar15 20:57] WSL2: Performing memory compaction.
	[Mar15 20:58] WSL2: Performing memory compaction.
	[Mar15 20:59] WSL2: Performing memory compaction.
	[Mar15 21:00] WSL2: Performing memory compaction.
	[Mar15 21:01] WSL2: Performing memory compaction.
	[Mar15 21:03] WSL2: Performing memory compaction.
	[ +24.007152] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Mar15 21:04] process 'docker/tmp/qemu-check145175011/check' started with executable stack
	[ +21.555954] WSL2: Performing memory compaction.
	[Mar15 21:06] WSL2: Performing memory compaction.
	[Mar15 21:07] hrtimer: interrupt took 920300 ns
	[Mar15 21:09] WSL2: Performing memory compaction.
	[Mar15 21:11] WSL2: Performing memory compaction.
	[Mar15 21:12] WSL2: Performing memory compaction.
	[Mar15 21:13] WSL2: Performing memory compaction.
	[Mar15 21:15] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [6824568445c6] <==
	* {"level":"info","ts":"2023-03-15T21:15:44.027Z","caller":"traceutil/trace.go:171","msg":"trace[2137636385] transaction","detail":"{read_only:false; number_of_response:1; response_revision:415; }","duration":"100.9434ms","start":"2023-03-15T21:15:43.926Z","end":"2023-03-15T21:15:44.027Z","steps":["trace[2137636385] 'process raft request'  (duration: 100.5034ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-15T21:15:44.540Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.6806ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873768454336989569 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" mod_revision:412 > success:<request_put:<key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" value_size:582 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-03-15T21:15:44.541Z","caller":"traceutil/trace.go:171","msg":"trace[493093877] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"109.7451ms","start":"2023-03-15T21:15:44.431Z","end":"2023-03-15T21:15:44.541Z","steps":["trace[493093877] 'process raft request'  (duration: 109.4963ms)"],"step_count":1}
	{"level":"info","ts":"2023-03-15T21:15:44.542Z","caller":"traceutil/trace.go:171","msg":"trace[455019656] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"198.5194ms","start":"2023-03-15T21:15:44.343Z","end":"2023-03-15T21:15:44.542Z","steps":["trace[455019656] 'process raft request'  (duration: 87.4089ms)","trace[455019656] 'compare'  (duration: 106.3186ms)"],"step_count":2}
	{"level":"info","ts":"2023-03-15T21:15:44.542Z","caller":"traceutil/trace.go:171","msg":"trace[1743432337] linearizableReadLoop","detail":"{readStateIndex:444; appliedIndex:443; }","duration":"112.8874ms","start":"2023-03-15T21:15:44.430Z","end":"2023-03-15T21:15:44.542Z","steps":["trace[1743432337] 'read index received'  (duration: 852.1µs)","trace[1743432337] 'applied index is now lower than readState.Index'  (duration: 112.0303ms)"],"step_count":2}
	{"level":"warn","ts":"2023-03-15T21:15:44.544Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.3137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-node-lease\" ","response":"range_response_count:1 size:363"}
	{"level":"info","ts":"2023-03-15T21:15:44.545Z","caller":"traceutil/trace.go:171","msg":"trace[83833859] range","detail":"{range_begin:/registry/namespaces/kube-node-lease; range_end:; response_count:1; response_revision:418; }","duration":"115.03ms","start":"2023-03-15T21:15:44.429Z","end":"2023-03-15T21:15:44.545Z","steps":["trace[83833859] 'agreement among raft nodes before linearized reading'  (duration: 113.2035ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-15T21:15:44.545Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.3129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2023-03-15T21:15:44.545Z","caller":"traceutil/trace.go:171","msg":"trace[1382087029] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:418; }","duration":"111.3651ms","start":"2023-03-15T21:15:44.434Z","end":"2023-03-15T21:15:44.545Z","steps":["trace[1382087029] 'agreement among raft nodes before linearized reading'  (duration: 111.2411ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-15T21:15:44.547Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.4412ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"info","ts":"2023-03-15T21:15:44.547Z","caller":"traceutil/trace.go:171","msg":"trace[1257507815] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:418; }","duration":"113.4898ms","start":"2023-03-15T21:15:44.434Z","end":"2023-03-15T21:15:44.547Z","steps":["trace[1257507815] 'agreement among raft nodes before linearized reading'  (duration: 113.3486ms)"],"step_count":1}
	{"level":"info","ts":"2023-03-15T21:15:44.956Z","caller":"traceutil/trace.go:171","msg":"trace[1166219815] linearizableReadLoop","detail":"{readStateIndex:447; appliedIndex:446; }","duration":"121.4317ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.956Z","steps":["trace[1166219815] 'read index received'  (duration: 3.7558ms)","trace[1166219815] 'applied index is now lower than readState.Index'  (duration: 117.6698ms)"],"step_count":2}
	{"level":"info","ts":"2023-03-15T21:15:44.956Z","caller":"traceutil/trace.go:171","msg":"trace[513205189] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"125.7589ms","start":"2023-03-15T21:15:44.830Z","end":"2023-03-15T21:15:44.956Z","steps":["trace[513205189] 'process raft request'  (duration: 94.9828ms)","trace[513205189] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/pods/kube-system/kube-proxy-m4md5; req_size:4522; } (duration: 27.8113ms)"],"step_count":2}
	{"level":"warn","ts":"2023-03-15T21:15:44.957Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.7804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2023-03-15T21:15:44.958Z","caller":"traceutil/trace.go:171","msg":"trace[1937091289] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:421; }","duration":"123.6279ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.958Z","steps":["trace[1937091289] 'agreement among raft nodes before linearized reading'  (duration: 121.5636ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-15T21:15:44.965Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.7433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" ","response":"range_response_count:64 size:57899"}
	{"level":"info","ts":"2023-03-15T21:15:44.965Z","caller":"traceutil/trace.go:171","msg":"trace[1243225417] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:64; response_revision:421; }","duration":"129.8213ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.965Z","steps":["trace[1243225417] 'agreement among raft nodes before linearized reading'  (duration: 123.3525ms)"],"step_count":1}
	{"level":"info","ts":"2023-03-15T21:15:46.132Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-03-15T21:15:46.132Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-073300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	WARNING: 2023/03/15 21:15:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2023/03/15 21:15:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-03-15T21:15:46.436Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
	{"level":"info","ts":"2023-03-15T21:15:46.534Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2023-03-15T21:15:46.538Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2023-03-15T21:15:46.538Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-073300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	
	* 
	* ==> etcd [aba41f11fdc8] <==
	* {"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2023-03-15T21:16:07.247Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-15T21:16:07.247Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-15T21:16:07.244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-15T21:16:07.326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-15T21:16:07.326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 3"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 4"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 4"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 4"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 4"}
	{"level":"info","ts":"2023-03-15T21:16:09.138Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-15T21:16:09.139Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:pause-073300 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-15T21:16:09.139Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-15T21:16:09.143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-15T21:16:09.144Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-15T21:16:09.147Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-15T21:16:09.147Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.103.2:2379"}
	
	* 
	* ==> kernel <==
	*  21:16:39 up  1:24,  0 users,  load average: 11.44, 9.76, 6.44
	Linux pause-073300 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [0cb5567e32ab] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 21:15:54.852783       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 21:15:55.001054       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 21:15:55.018769       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [88f944458735] <==
	* I0315 21:16:13.321430       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0315 21:16:13.321719       1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
	I0315 21:16:13.322696       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 21:16:13.320670       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0315 21:16:13.320231       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0315 21:16:13.324414       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0315 21:16:13.437824       1 shared_informer.go:280] Caches are synced for configmaps
	I0315 21:16:13.525354       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0315 21:16:13.623881       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 21:16:13.624222       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0315 21:16:13.624252       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0315 21:16:13.624258       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0315 21:16:13.624333       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 21:16:13.625322       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0315 21:16:13.625384       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0315 21:16:13.625410       1 cache.go:39] Caches are synced for autoregister controller
	I0315 21:16:13.630897       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 21:16:14.357698       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0315 21:16:16.572417       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0315 21:16:16.602561       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0315 21:16:16.951884       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0315 21:16:17.136478       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0315 21:16:17.246459       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0315 21:16:28.244694       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0315 21:16:28.342519       1 controller.go:615] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [1f51fce69c22] <==
	* I0315 21:15:33.346996       1 serving.go:348] Generated self-signed cert in-memory
	I0315 21:15:39.060876       1 controllermanager.go:182] Version: v1.26.2
	I0315 21:15:39.061047       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 21:15:39.072013       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0315 21:15:39.072120       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 21:15:39.072625       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 21:15:39.072677       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-controller-manager [e6bb3d9a35ff] <==
	* I0315 21:16:28.124587       1 shared_informer.go:280] Caches are synced for cidrallocator
	I0315 21:16:28.124592       1 shared_informer.go:280] Caches are synced for crt configmap
	I0315 21:16:28.124598       1 shared_informer.go:280] Caches are synced for endpoint
	I0315 21:16:28.124661       1 shared_informer.go:280] Caches are synced for HPA
	I0315 21:16:28.124898       1 shared_informer.go:280] Caches are synced for GC
	I0315 21:16:28.124186       1 shared_informer.go:280] Caches are synced for taint
	I0315 21:16:28.125247       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0315 21:16:28.125313       1 taint_manager.go:211] "Sending events to api server"
	I0315 21:16:28.125358       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	W0315 21:16:28.125464       1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-073300. Assuming now as a timestamp.
	I0315 21:16:28.125524       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0315 21:16:28.126198       1 event.go:294] "Event occurred" object="pause-073300" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-073300 event: Registered Node pause-073300 in Controller"
	I0315 21:16:28.126964       1 shared_informer.go:280] Caches are synced for stateful set
	I0315 21:16:28.227084       1 shared_informer.go:280] Caches are synced for namespace
	I0315 21:16:28.227137       1 shared_informer.go:280] Caches are synced for disruption
	I0315 21:16:28.227298       1 shared_informer.go:280] Caches are synced for deployment
	I0315 21:16:28.227547       1 shared_informer.go:280] Caches are synced for ReplicaSet
	I0315 21:16:28.227631       1 shared_informer.go:280] Caches are synced for service account
	I0315 21:16:28.229520       1 shared_informer.go:273] Waiting for caches to sync for garbage collector
	I0315 21:16:28.233560       1 shared_informer.go:280] Caches are synced for resource quota
	I0315 21:16:28.236781       1 shared_informer.go:280] Caches are synced for resource quota
	I0315 21:16:28.529112       1 event.go:294] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0315 21:16:28.534472       1 shared_informer.go:280] Caches are synced for garbage collector
	I0315 21:16:28.562852       1 shared_informer.go:280] Caches are synced for garbage collector
	I0315 21:16:28.562973       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [b7b4669a56d5] <==
	* I0315 21:16:25.942265       1 node.go:163] Successfully retrieved node IP: 192.168.103.2
	I0315 21:16:25.944192       1 server_others.go:109] "Detected node IP" address="192.168.103.2"
	I0315 21:16:25.944360       1 server_others.go:535] "Using iptables proxy"
	I0315 21:16:26.134212       1 server_others.go:176] "Using iptables Proxier"
	I0315 21:16:26.134360       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0315 21:16:26.134376       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0315 21:16:26.134395       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0315 21:16:26.134427       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 21:16:26.135408       1 server.go:655] "Version info" version="v1.26.2"
	I0315 21:16:26.135540       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 21:16:26.136322       1 config.go:317] "Starting service config controller"
	I0315 21:16:26.136477       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0315 21:16:26.136504       1 config.go:226] "Starting endpoint slice config controller"
	I0315 21:16:26.136526       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0315 21:16:26.136357       1 config.go:444] "Starting node config controller"
	I0315 21:16:26.137498       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0315 21:16:26.236790       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0315 21:16:26.238214       1 shared_informer.go:280] Caches are synced for node config
	I0315 21:16:26.238275       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-proxy [c2ad60cad36d] <==
	* E0315 21:15:29.627155       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": dial tcp 192.168.103.2:8443: connect: connection refused
	E0315 21:15:30.826046       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": dial tcp 192.168.103.2:8443: connect: connection refused
	E0315 21:15:43.235847       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": net/http: TLS handshake timeout
	
	* 
	* ==> kube-scheduler [571d48566917] <==
	* I0315 21:16:07.677853       1 serving.go:348] Generated self-signed cert in-memory
	I0315 21:16:13.656832       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.2"
	I0315 21:16:13.656978       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 21:16:13.756221       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0315 21:16:13.756343       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0315 21:16:13.758353       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0315 21:16:13.758370       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 21:16:13.759778       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 21:16:13.759904       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 21:16:13.757625       1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0315 21:16:13.758377       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0315 21:16:13.924166       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0315 21:16:13.924382       1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
	I0315 21:16:13.924585       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [95e8431f8447] <==
	* I0315 21:15:34.052612       1 serving.go:348] Generated self-signed cert in-memory
	W0315 21:15:44.136305       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 21:15:44.140386       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 21:15:44.225673       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 21:15:44.225720       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 21:15:44.445561       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.2"
	I0315 21:15:44.445741       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 21:15:44.453477       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0315 21:15:44.455841       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 21:15:44.456010       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 21:15:44.456059       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 21:15:44.925804       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 21:15:46.348879       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0315 21:15:46.350010       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 21:15:46.352703       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I0315 21:15:46.355076       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0315 21:15:46.355314       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2023-03-15 21:13:03 UTC, end at Wed 2023-03-15 21:16:39 UTC. --
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.766354    7548 kubelet_node_status.go:73] "Successfully registered node" node="pause-073300"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.826254    7548 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.828960    7548 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.830844    7548 apiserver.go:52] "Watching apiserver"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.846713    7548 topology_manager.go:210] "Topology Admit Handler"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.846988    7548 topology_manager.go:210] "Topology Admit Handler"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.925554    7548 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944245    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/428ae579-2b68-4526-a2b0-d8bb5922870f-kube-proxy\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944547    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/428ae579-2b68-4526-a2b0-d8bb5922870f-xtables-lock\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944610    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/428ae579-2b68-4526-a2b0-d8bb5922870f-lib-modules\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944669    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vbb\" (UniqueName: \"kubernetes.io/projected/428ae579-2b68-4526-a2b0-d8bb5922870f-kube-api-access-b7vbb\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945094    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc-config-volume\") pod \"coredns-787d4945fb-2q246\" (UID: \"13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc\") " pod="kube-system/coredns-787d4945fb-2q246"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945520    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbnj9\" (UniqueName: \"kubernetes.io/projected/13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc-kube-api-access-mbnj9\") pod \"coredns-787d4945fb-2q246\" (UID: \"13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc\") " pod="kube-system/coredns-787d4945fb-2q246"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945563    7548 reconciler.go:41] "Reconciler: start to sync state"
	Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.149192    7548 scope.go:115] "RemoveContainer" containerID="c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62"
	Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.150324    7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154149    7548 kuberuntime_manager.go:872] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.9.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mbnj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropaga
tion:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},S
tdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-787d4945fb-2q246_kube-system(13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154342    7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-787d4945fb-2q246" podUID=13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154347    7548 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.2,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},Vol
umeMount{Name:kube-api-access-b7vbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-m4md5_kube-system(428ae579-2b68-4526-a2b0-d8bb5922870f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.155707    7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-m4md5" podUID=428ae579-2b68-4526-a2b0-d8bb5922870f
	Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.763783    7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.768377    7548 kuberuntime_manager.go:872] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.9.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mbnj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropaga
tion:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},S
tdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-787d4945fb-2q246_kube-system(13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.768684    7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-787d4945fb-2q246" podUID=13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc
	Mar 15 21:16:25 pause-073300 kubelet[7548]: I0315 21:16:25.248648    7548 scope.go:115] "RemoveContainer" containerID="c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62"
	Mar 15 21:16:27 pause-073300 kubelet[7548]: I0315 21:16:27.246680    7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-073300 -n pause-073300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-073300 -n pause-073300: (2.1781542s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-073300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-073300
helpers_test.go:235: (dbg) docker inspect pause-073300:
-- stdout --
	[
	    {
	        "Id": "8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af",
	        "Created": "2023-03-15T21:12:57.6447279Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235036,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-15T21:13:02.5343301Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c2228ee73b919fe6986a8848f936a81a268f0e56f65fc402964f596a1336d16b",
	        "ResolvConfPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/hostname",
	        "HostsPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/hosts",
	        "LogPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af-json.log",
	        "Name": "/pause-073300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-073300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-073300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b-init/diff:/var/lib/docker/overlay2/dd4a105805e89f3781ba34ad53d0a86096f0b864f9eade98210c90b3db11e614/diff:/var/lib/docker/overlay2/85f05c8966ab20f24eea0cadf9b702a2755c1a700aee4fcacd3754b8fa7f8a91/diff:/var/lib/docker/overlay2/b2c60f67ad52427067a519010db687573f6b5b01526e9e9493d88bbb3dcaf069/diff:/var/lib/docker/overlay2/ca870ef465e163b19b7e0ef24b89c201cc7cfe12753a6ca6a515827067e4fc98/diff:/var/lib/docker/overlay2/f55801eccf5ae4ff6206eaaaca361e1d9bfadc5759172bb8072e835b0002419b/diff:/var/lib/docker/overlay2/3da247e6db7b0c502d6067a49cfb704f596cd5fe9a3a874f6888ae9cc2373233/diff:/var/lib/docker/overlay2/f0dcb6d169a751860b7c097c666afe3d8fba3aac20d90e95b7f85913b7d1fda7/diff:/var/lib/docker/overlay2/a0c906b3378b625d84a7a2d043cc982545599c488b72767e2b4822211ddee871/diff:/var/lib/docker/overlay2/1380f7e23737bb69bab3e1c3b37fff4a603a1096ba1e984f2808fdb9fc5664b7/diff:/var/lib/docker/overlay2/f09380
dffb1afe5e97599b999b6d05a1d0b97490fc3afb897018955e3589ddf0/diff:/var/lib/docker/overlay2/12504a4aab3b43a1624555c565265eb2a252f3cc64b5942527ead795f1b46742/diff:/var/lib/docker/overlay2/2f17a40545e098dc56e6667d78dfde761f9ae57ff4c2dcab77a6135abc29f050/diff:/var/lib/docker/overlay2/378841db26151d8a66f60032a9366d4572aeb0fd0db1c1af9429abf5d7b6ab82/diff:/var/lib/docker/overlay2/14ee7241acf63b7e56e700bccdbcc29bd6530ebd357799238641498ccb978bc1/diff:/var/lib/docker/overlay2/0e384b8276413ac21818038eacaf3da54a8ac43c6ccef737b2c4e70e568fe287/diff:/var/lib/docker/overlay2/66beff05ea52aebfaea737c44ff3da16f742e7e2577ccea2c1fe954085a1e7f4/diff:/var/lib/docker/overlay2/fe7b0a2c7d3f1889e322a156881a5066e5e784dc1888fbf172b4beada499c14a/diff:/var/lib/docker/overlay2/bf3118300571672a5d3b839bbbbaa42516c05f16305f5b944d88d38687857207/diff:/var/lib/docker/overlay2/d1326cf983418efce550556b370f71d9b4d9e6671a9267ea6433967dcafff129/diff:/var/lib/docker/overlay2/cc4d1369146bbaac53f23e5cb8e072c195a8c109396c1f305d9a90dbcb491d62/diff:/var/lib/d
ocker/overlay2/20a6a00f4e15b51632a8a26911faf3243318c3e7bd9266fe9c926ca6070526a8/diff:/var/lib/docker/overlay2/6a6bfa0be9e2c1a0aa9fa555897c7f62f7c23b782a2117560731f10b833692a0/diff:/var/lib/docker/overlay2/0d9ed53179f81c8d2e276195863f6ac1ba99be69a7217caa97c19fe1121b0d38/diff:/var/lib/docker/overlay2/f9e70916967de3d00f48ca66d15ec3af34bd3980334b7ecb8950be0a5aee2e5e/diff:/var/lib/docker/overlay2/8a3ebe53f0b355704a58efda53f1dcf8ae0099f0a7947c748e7c447044baed05/diff:/var/lib/docker/overlay2/f6841f5c7deb52ba587f1365fd0bc48fe4334bd9678f4846740d9e4f3df386c4/diff:/var/lib/docker/overlay2/7729eb6c4bb6c79eae923e1946b180dcdb33aa85c259a8a21b46994e681a329f/diff:/var/lib/docker/overlay2/86ccbe980393e3c2dea4faf1f5b45fa86ac8f47190cf4fb3ebb23d5fd6687d44/diff:/var/lib/docker/overlay2/48b28921897a52ef79e37091b3d3df88fa4e01604e3a63d7e3dbbd72e551797c/diff:/var/lib/docker/overlay2/b9f9c70e4945260452936930e508cb1e7d619927da4230c7b792e5908a93ec46/diff:/var/lib/docker/overlay2/39f84637efc722da57b6de997d757e8709af3d48f8cba3da8848d3674aa
7ba4d/diff:/var/lib/docker/overlay2/9d81ba80e5128eb395bcffc7b56889c3d18172c222e637671a4b3c12c0a72afd/diff:/var/lib/docker/overlay2/03583facbdd50e79e467eb534dfcbe3d5e47aef4b25195138b3c0134ebd7f07e/diff:/var/lib/docker/overlay2/38e991cef8fb39c883da64e57775232fd1df5a4c67f32565e747b7363f336632/diff:/var/lib/docker/overlay2/0e0ebf6f489a93585842ec4fef7d044da67fd8a9504f91fe03cc03c6928134b8/diff:/var/lib/docker/overlay2/dedec87bbba9e6a1a68a159c167cac4c10a25918fa3d00630d6570db2ca290eb/diff:/var/lib/docker/overlay2/dc09130400d9f44a28862a6484b44433985893e9a8f49df62c38c0bd6b5e4e2c/diff:/var/lib/docker/overlay2/f00d229f6d9f2960571b2e1c365f30bd680b686c0d4569b5190c072a626c6811/diff:/var/lib/docker/overlay2/1a9993f098965bbd60b6e43b5998e4fcae02f81d65cc863bd8f6e29f7e2b8426/diff:/var/lib/docker/overlay2/500f950cf1835311103c129d3c1487e8e6b917ad928788ee14527cd8342c544f/diff:/var/lib/docker/overlay2/018feb310d5aa53cd6175c82f8ca56d22b3c1ad26ae5cfda5f6e3b56ca3919e6/diff:/var/lib/docker/overlay2/f84198610374e88e1ba6917bf70c8d9cea6ede
68b5fb4852c7eebcb536a12a83/diff",
	                "MergedDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-073300",
	                "Source": "/var/lib/docker/volumes/pause-073300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-073300",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-073300",
	                "name.minikube.sigs.k8s.io": "pause-073300",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c465f6b5b8ea2cbabcd582f953a2ee6755ba6c0b6db6fbc3b931a291aafae975",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65160"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65161"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65164"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65165"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c465f6b5b8ea",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-073300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8be68eee5af2",
	                        "pause-073300"
	                    ],
	                    "NetworkID": "e97288cdb8ed8d3c843be70e49117f727e8c88772310c60f193237b2f3d2167f",
	                    "EndpointID": "7dff20190b061cfe2a0b46f43c2f9a085fd94900413646e6b074cab27b5ac50e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-073300 -n pause-073300
E0315 21:16:45.025974    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-073300 -n pause-073300: (2.5347633s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-073300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-073300 logs -n 25: (4.3846717s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo cat                            | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo cat                            | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | cri-dockerd --version                                |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | systemctl status containerd                          |                          |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | systemctl cat containerd                             |                          |                   |         |                     |                     |
	|         | --no-pager                                           |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo cat                            | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo cat                            | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | containerd config dump                               |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | systemctl status crio --all                          |                          |                   |         |                     |                     |
	|         | --full --no-pager                                    |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo                                | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo find                           | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |                   |         |                     |                     |
	| ssh     | -p cilium-899600 sudo crio                           | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | config                                               |                          |                   |         |                     |                     |
	| delete  | -p cilium-899600                                     | cilium-899600            | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
	| start   | -p force-systemd-env-387800                          | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:15 UTC |
	|         | --memory=2048                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	| ssh     | cert-options-298900 ssh                              | cert-options-298900      | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
	|         | openssl x509 -text -noout -in                        |                          |                   |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                          |                   |         |                     |                     |
	| ssh     | -p cert-options-298900 -- sudo                       | cert-options-298900      | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                          |                   |         |                     |                     |
	| delete  | -p cert-options-298900                               | cert-options-298900      | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
	| delete  | -p cert-expiration-023900                            | cert-expiration-023900   | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
	| start   | -p old-k8s-version-103800                            | old-k8s-version-103800   | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | --memory=2200                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                          |                   |         |                     |                     |
	|         | --kvm-network=default                                |                          |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                          |                   |         |                     |                     |
	|         | --disable-driver-mounts                              |                          |                   |         |                     |                     |
	|         | --keep-context=false                                 |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                          |                   |         |                     |                     |
	| start   | -p no-preload-470000                                 | no-preload-470000        | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC |                     |
	|         | --memory=2200                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr                                    |                          |                   |         |                     |                     |
	|         | --wait=true --preload=false                          |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.26.2                         |                          |                   |         |                     |                     |
	| start   | -p pause-073300                                      | pause-073300             | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:14 UTC | 15 Mar 23 21:16 UTC |
	|         | --alsologtostderr -v=1                               |                          |                   |         |                     |                     |
	|         | --driver=docker                                      |                          |                   |         |                     |                     |
	| ssh     | force-systemd-env-387800                             | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC | 15 Mar 23 21:15 UTC |
	|         | ssh docker info --format                             |                          |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                                    |                          |                   |         |                     |                     |
	| delete  | -p force-systemd-env-387800                          | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC | 15 Mar 23 21:15 UTC |
	| start   | -p embed-certs-348900                                | embed-certs-348900       | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC |                     |
	|         | --memory=2200                                        |                          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                          |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.26.2                         |                          |                   |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/15 21:15:28
	Running on machine: minikube1
	Binary: Built with gc go1.20.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 21:15:28.142992   11164 out.go:296] Setting OutFile to fd 1840 ...
	I0315 21:15:28.223401   11164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 21:15:28.223401   11164 out.go:309] Setting ErrFile to fd 1952...
	I0315 21:15:28.223401   11164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 21:15:28.262334   11164 out.go:303] Setting JSON to false
	I0315 21:15:28.267297   11164 start.go:125] hostinfo: {"hostname":"minikube1","uptime":24330,"bootTime":1678890597,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2728 Build 19045.2728","kernelVersion":"10.0.19045.2728 Build 19045.2728","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0315 21:15:28.269446   11164 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0315 21:15:28.271110   11164 out.go:177] * [embed-certs-348900] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	I0315 21:15:28.276466   11164 notify.go:220] Checking for updates...
	I0315 21:15:28.279987   11164 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 21:15:28.284307   11164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 21:15:28.287394   11164 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0315 21:15:28.289437   11164 out.go:177]   - MINIKUBE_LOCATION=16056
	I0315 21:15:28.293408   11164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 21:15:27.107652    3304 kubeadm.go:322] [apiclient] All control plane components are healthy after 22.564526 seconds
	I0315 21:15:27.107905    3304 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 21:15:27.174450    3304 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 21:15:27.850318    3304 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 21:15:27.850318    3304 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-103800 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0315 21:15:28.451439    3304 kubeadm.go:322] [bootstrap-token] Using token: 1vsykl.s1ca43i7aq3le3xp
	I0315 21:15:28.454827    3304 out.go:204]   - Configuring RBAC rules ...
	I0315 21:15:28.455102    3304 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 21:15:28.540595    3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 21:15:28.708614    3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 21:15:28.750604    3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 21:15:28.768374    3304 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 21:15:28.296206   11164 config.go:182] Loaded profile config "no-preload-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:15:28.296901   11164 config.go:182] Loaded profile config "old-k8s-version-103800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0315 21:15:28.296901   11164 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:15:28.297434   11164 driver.go:365] Setting default libvirt URI to qemu:///system
	I0315 21:15:28.716358   11164 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0315 21:15:28.733128   11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 21:15:30.097371   11164 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.3641858s)
	I0315 21:15:30.098315   11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:15:28.9739466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plu
gins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 21:15:30.101949   11164 out.go:177] * Using the docker driver based on user configuration
	I0315 21:15:25.993789    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:26.001803    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:26.031907    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:26.498885    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:26.520413    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:26.747568    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:26.998577    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:27.005491    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:27.038573    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:27.494449    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:27.510680    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:27.648998    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:28.001209    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:28.016866    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:28.252092    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:28.497926    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:28.519187    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:28.938518    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:29.005873    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:29.022195    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0315 21:15:29.437505    1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:29.498878    1332 api_server.go:165] Checking apiserver status ...
	I0315 21:15:29.509169    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:15:29.790027    1332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6279/cgroup
	I0315 21:15:30.138061    1332 api_server.go:181] apiserver freezer: "20:freezer:/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16"
	I0315 21:15:30.167896    1332 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16/freezer.state
	I0315 21:15:30.342651    1332 api_server.go:203] freezer state: "THAWED"
	I0315 21:15:30.342651    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:30.356716    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:15:30.356862    1332 retry.go:31] will retry after 297.564807ms: state is "Stopped"
	I0315 21:15:28.433538    4576 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.6-0 | docker load": (19.494237s)
	I0315 21:15:28.433538    4576 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.6-0 from cache
	I0315 21:15:28.433538    4576 cache_images.go:123] Successfully loaded all cached images
	I0315 21:15:28.434115    4576 cache_images.go:92] LoadImages completed in 1m0.6105675s
	I0315 21:15:28.453600    4576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0315 21:15:28.577481    4576 cni.go:84] Creating CNI manager for ""
	I0315 21:15:28.577553    4576 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:15:28.577553    4576 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0315 21:15:28.577617    4576 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-470000 NodeName:no-preload-470000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0315 21:15:28.577869    4576 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "no-preload-470000"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 21:15:28.577869    4576 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=no-preload-470000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:no-preload-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0315 21:15:28.591514    4576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0315 21:15:28.640201    4576 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.26.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.26.2': No such file or directory
	
	Initiating transfer...
	I0315 21:15:28.658006    4576 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.26.2
	I0315 21:15:28.718165    4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubectl
	I0315 21:15:28.718374    4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubeadm
	I0315 21:15:28.718374    4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubelet
	I0315 21:15:30.110051    4576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.2/kubeadm
	I0315 21:15:30.131361    4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubeadm': No such file or directory
	I0315 21:15:30.131361    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubeadm --> /var/lib/minikube/binaries/v1.26.2/kubeadm (46768128 bytes)
	I0315 21:15:30.168761    4576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.2/kubectl
	I0315 21:15:30.671927    4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubectl': No such file or directory
	I0315 21:15:30.672203    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubectl --> /var/lib/minikube/binaries/v1.26.2/kubectl (48029696 bytes)
	I0315 21:15:30.105668   11164 start.go:296] selected driver: docker
	I0315 21:15:30.105668   11164 start.go:857] validating driver "docker" against <nil>
	I0315 21:15:30.105668   11164 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 21:15:30.254283   11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 21:15:31.493680   11164 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2393516s)
	I0315 21:15:31.494207   11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:15:30.5680929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 21:15:31.494635   11164 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0315 21:15:31.496393   11164 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 21:15:31.499064   11164 out.go:177] * Using Docker Desktop driver with root privileges
	I0315 21:15:31.501160   11164 cni.go:84] Creating CNI manager for ""
	I0315 21:15:31.501160   11164 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:15:31.501160   11164 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 21:15:31.501160   11164 start_flags.go:319] config:
	{Name:embed-certs-348900 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 21:15:31.504086   11164 out.go:177] * Starting control plane node embed-certs-348900 in cluster embed-certs-348900
	I0315 21:15:31.506766   11164 cache.go:120] Beginning downloading kic base image for docker with docker
	I0315 21:15:31.510102   11164 out.go:177] * Pulling base image ...
	I0315 21:15:31.512871   11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0315 21:15:31.512871   11164 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon
	I0315 21:15:31.513118   11164 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0315 21:15:31.513179   11164 cache.go:57] Caching tarball of preloaded images
	I0315 21:15:31.513395   11164 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0315 21:15:31.513395   11164 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0315 21:15:31.514113   11164 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json ...
	I0315 21:15:31.514113   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json: {Name:mk3060d08febbde2429fe9a2baf8bbeb029a2640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:31.875381   11164 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon, skipping pull
	I0315 21:15:31.875429   11164 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f exists in daemon, skipping load
	I0315 21:15:31.875429   11164 cache.go:193] Successfully downloaded all kic artifacts
	I0315 21:15:31.875429   11164 start.go:364] acquiring machines lock for embed-certs-348900: {Name:mk2351699223ac71a23a94063928109d9d9f576a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 21:15:31.875429   11164 start.go:368] acquired machines lock for "embed-certs-348900" in 0s
	I0315 21:15:31.876003   11164 start.go:93] Provisioning new machine with config: &{Name:embed-certs-348900 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0315 21:15:31.876319   11164 start.go:125] createHost starting for "" (driver="docker")
	I0315 21:15:31.880060   11164 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0315 21:15:31.880999   11164 start.go:159] libmachine.API.Create for "embed-certs-348900" (driver="docker")
	I0315 21:15:31.881063   11164 client.go:168] LocalClient.Create starting
	I0315 21:15:31.881279   11164 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0315 21:15:31.881815   11164 main.go:141] libmachine: Decoding PEM data...
	I0315 21:15:31.881932   11164 main.go:141] libmachine: Parsing certificate...
	I0315 21:15:31.881975   11164 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0315 21:15:31.881975   11164 main.go:141] libmachine: Decoding PEM data...
	I0315 21:15:31.881975   11164 main.go:141] libmachine: Parsing certificate...
	I0315 21:15:31.896077   11164 cli_runner.go:164] Run: docker network inspect embed-certs-348900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0315 21:15:32.230585   11164 cli_runner.go:211] docker network inspect embed-certs-348900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0315 21:15:32.246557   11164 network_create.go:281] running [docker network inspect embed-certs-348900] to gather additional debugging logs...
	I0315 21:15:32.246658   11164 cli_runner.go:164] Run: docker network inspect embed-certs-348900
	W0315 21:15:32.585407   11164 cli_runner.go:211] docker network inspect embed-certs-348900 returned with exit code 1
	I0315 21:15:32.585485   11164 network_create.go:284] error running [docker network inspect embed-certs-348900]: docker network inspect embed-certs-348900: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-348900
	I0315 21:15:32.585531   11164 network_create.go:286] output of [docker network inspect embed-certs-348900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-348900
	
	** /stderr **
	I0315 21:15:32.596667   11164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0315 21:15:32.951201   11164 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0315 21:15:32.983071   11164 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e77440}
	I0315 21:15:32.983153   11164 network_create.go:123] attempt to create docker network embed-certs-348900 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0315 21:15:32.994000   11164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900
	I0315 21:15:29.902489    3304 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0315 21:15:30.410425    3304 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0315 21:15:30.439154    3304 kubeadm.go:322] 
	I0315 21:15:30.439418    3304 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0315 21:15:30.439418    3304 kubeadm.go:322] 
	I0315 21:15:30.440591    3304 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0315 21:15:30.440591    3304 kubeadm.go:322] 
	I0315 21:15:30.440591    3304 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0315 21:15:30.440591    3304 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 21:15:30.440591    3304 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 21:15:30.441146    3304 kubeadm.go:322] 
	I0315 21:15:30.441302    3304 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0315 21:15:30.441302    3304 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 21:15:30.441302    3304 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 21:15:30.441302    3304 kubeadm.go:322] 
	I0315 21:15:30.442077    3304 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0315 21:15:30.442368    3304 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0315 21:15:30.442368    3304 kubeadm.go:322] 
	I0315 21:15:30.442768    3304 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1vsykl.s1ca43i7aq3le3xp \
	I0315 21:15:30.442976    3304 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 \
	I0315 21:15:30.442976    3304 kubeadm.go:322]     --control-plane 	  
	I0315 21:15:30.442976    3304 kubeadm.go:322] 
	I0315 21:15:30.442976    3304 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0315 21:15:30.442976    3304 kubeadm.go:322] 
	I0315 21:15:30.442976    3304 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1vsykl.s1ca43i7aq3le3xp \
	I0315 21:15:30.442976    3304 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 
	I0315 21:15:30.449019    3304 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0315 21:15:30.449255    3304 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0315 21:15:30.449632    3304 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0315 21:15:30.449944    3304 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 21:15:30.449944    3304 cni.go:84] Creating CNI manager for ""
	I0315 21:15:30.449944    3304 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0315 21:15:30.449944    3304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 21:15:30.475844    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:30.480125    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=old-k8s-version-103800 minikube.k8s.io/updated_at=2023_03_15T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:30.550685    3304 ops.go:34] apiserver oom_adj: -16
	I0315 21:15:30.665183    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:30.674974    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:15:30.675152    1332 retry.go:31] will retry after 319.696256ms: state is "Stopped"
	I0315 21:15:31.004595    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:31.271800    4576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:15:32.105850    4576 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.2/kubelet
	I0315 21:15:32.876011    4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.26.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubelet': No such file or directory
	I0315 21:15:32.876276    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubelet --> /var/lib/minikube/binaries/v1.26.2/kubelet (121268472 bytes)
	W0315 21:15:33.333982   11164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900 returned with exit code 1
	W0315 21:15:33.334081   11164 network_create.go:148] failed to create docker network embed-certs-348900 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0315 21:15:33.334145   11164 network_create.go:115] failed to create docker network embed-certs-348900 192.168.58.0/24, will retry: subnet is taken
	I0315 21:15:33.379254   11164 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0315 21:15:33.406969   11164 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e10420}
	I0315 21:15:33.406969   11164 network_create.go:123] attempt to create docker network embed-certs-348900 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0315 21:15:33.416637   11164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900
	I0315 21:15:33.931710   11164 network_create.go:107] docker network embed-certs-348900 192.168.67.0/24 created
	I0315 21:15:33.931710   11164 kic.go:117] calculated static IP "192.168.67.2" for the "embed-certs-348900" container
	I0315 21:15:33.961692   11164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0315 21:15:34.382414   11164 cli_runner.go:164] Run: docker volume create embed-certs-348900 --label name.minikube.sigs.k8s.io=embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true
	I0315 21:15:34.716016   11164 oci.go:103] Successfully created a docker volume embed-certs-348900
	I0315 21:15:34.727122   11164 cli_runner.go:164] Run: docker run --rm --name embed-certs-348900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --entrypoint /usr/bin/test -v embed-certs-348900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -d /var/lib
	I0315 21:15:34.549401    3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=old-k8s-version-103800 minikube.k8s.io/updated_at=2023_03_15T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (4.0692845s)
	I0315 21:15:34.549401    3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (4.0735649s)
	I0315 21:15:34.575936    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:35.677911    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:36.689764    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:37.173919    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:37.680647    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:38.677808    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:36.012455    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0315 21:15:36.012558    1332 retry.go:31] will retry after 307.806183ms: state is "Stopped"
	I0315 21:15:36.332781    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:38.718404   11164 cli_runner.go:217] Completed: docker run --rm --name embed-certs-348900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --entrypoint /usr/bin/test -v embed-certs-348900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -d /var/lib: (3.9912367s)
	I0315 21:15:38.718694   11164 oci.go:107] Successfully prepared a docker volume embed-certs-348900
	I0315 21:15:38.718763   11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0315 21:15:38.718763   11164 kic.go:190] Starting extracting preloaded images to volume ...
	I0315 21:15:38.735548   11164 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-348900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -I lz4 -xf /preloaded.tar -C /extractDir
	I0315 21:15:39.684705    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:40.178045    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:41.173794    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:41.681379    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:42.683323    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:43.182131    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:41.339223    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0315 21:15:41.339409    1332 retry.go:31] will retry after 386.719795ms: state is "Stopped"
	I0315 21:15:41.739620    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:44.046130    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 21:15:44.046265    1332 retry.go:31] will retry after 731.95405ms: https://127.0.0.1:65165/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 21:15:44.784826    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:15:44.930024    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:15:44.930412    1332 kubeadm.go:608] needs reconfigure: apiserver error: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
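The verbose /healthz body above marks each check with `[+]` (passing) or `[-]` (failing); minikube's "needs reconfigure" decision keys off any `[-]` entry. A minimal sketch of pulling the failing check names out of such a body (the sample body is a shortened copy of the response logged above):

```shell
# Extract the names of failing checks from a verbose /healthz body.
# Sample body abbreviated from the 500 response in the log above.
body='[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
healthz check failed'

# "[-]<name> failed: ..." -> "<name>": keep [-] lines, strip the marker
# and everything from the first space onward.
failed=$(printf '%s\n' "$body" | grep '^\[-\]' | sed -e 's/^\[-\]//' -e 's/ .*//')
printf '%s\n' "$failed"
# -> poststarthook/rbac/bootstrap-roles
#    poststarthook/scheduling/bootstrap-system-priority-classes
```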
	I0315 21:15:44.930412    1332 kubeadm.go:1120] stopping kube-system containers ...
	I0315 21:15:44.948612    1332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0315 21:15:45.444192    1332 docker.go:456] Stopping containers: [e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0]
	I0315 21:15:45.468741    1332 ssh_runner.go:195] Run: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0
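The `--filter=name=k8s_.*_(kube-system)_` above exploits the naming scheme kubelet (via cri-dockerd) uses for Docker containers, `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>`: the name filter is a regex that keys on the namespace field. The same pattern can be exercised with `grep -E`; the container names below are invented for illustration:

```shell
# kubelet-created Docker containers are named
# k8s_<container>_<pod>_<namespace>_<uid>_<attempt>; the docker ps name
# filter in the log is a regex selecting the kube-system ones.
names='k8s_kube-apiserver_kube-apiserver-pause-073300_kube-system_abc123_0
k8s_nginx_nginx-deploy-7f_default_def456_0'

matched=$(printf '%s\n' "$names" | grep -E 'k8s_.*_(kube-system)_')
printf '%s\n' "$matched"
# -> k8s_kube-apiserver_kube-apiserver-pause-073300_kube-system_abc123_0
```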
	I0315 21:15:44.191394    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:45.685532    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:15:48.821222    3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.1356462s)
	I0315 21:15:48.821384    3304 kubeadm.go:1073] duration metric: took 18.3714764s to wait for elevateKubeSystemPrivileges.
	I0315 21:15:48.821384    3304 kubeadm.go:403] StartCluster complete in 50.2400255s
	I0315 21:15:48.821513    3304 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:48.821905    3304 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 21:15:48.825059    3304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:48.828077    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 21:15:48.828077    3304 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0315 21:15:48.828879    3304 config.go:182] Loaded profile config "old-k8s-version-103800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0315 21:15:48.828800    3304 addons.go:66] Setting storage-provisioner=true in profile "old-k8s-version-103800"
	I0315 21:15:48.829118    3304 addons.go:66] Setting default-storageclass=true in profile "old-k8s-version-103800"
	I0315 21:15:48.829179    3304 addons.go:228] Setting addon storage-provisioner=true in "old-k8s-version-103800"
	I0315 21:15:48.829179    3304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-103800"
	I0315 21:15:48.829300    3304 host.go:66] Checking if "old-k8s-version-103800" exists ...
	I0315 21:15:48.878494    3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
	I0315 21:15:48.879545    3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
	I0315 21:15:49.358108    3304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 21:15:50.124359    4576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 21:15:50.223962    4576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (449 bytes)
	I0315 21:15:50.297658    4576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 21:15:50.374920    4576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0315 21:15:50.483223    4576 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0315 21:15:50.503211    4576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
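The one-liner above injects the `control-plane.minikube.internal` host record: it drops any stale entry, appends the current IP, and writes through a temp file so /etc/hosts is replaced in one `cp`. A sketch of the same idiom run against a scratch copy instead of the real /etc/hosts (paths and the stale 192.168.1.9 entry are illustrative):

```shell
# Scratch stand-in for /etc/hosts, seeded with a stale control-plane record.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.1.9\tcontrol-plane.minikube.internal\n' > "$hosts"

# Drop any existing control-plane.minikube.internal line, append the
# current IP, and swap the file in whole rather than editing in place.
{ grep -v 'control-plane.minikube.internal$' "$hosts"; \
  printf '192.168.85.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

grep 'control-plane.minikube.internal' "$hosts"
# -> 192.168.85.2	control-plane.minikube.internal
```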
	I0315 21:15:50.560996    4576 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000 for IP: 192.168.85.2
	I0315 21:15:50.561164    4576 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:50.561830    4576 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0315 21:15:50.562026    4576 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0315 21:15:50.562749    4576 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key
	I0315 21:15:50.562749    4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt with IP's: []
	I0315 21:15:49.456354    3304 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 21:15:49.456907    3304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 21:15:49.478318    3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-103800
	I0315 21:15:49.848514    3304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65315 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\old-k8s-version-103800\id_rsa Username:docker}
	I0315 21:15:49.872375    3304 addons.go:228] Setting addon default-storageclass=true in "old-k8s-version-103800"
	I0315 21:15:49.872629    3304 host.go:66] Checking if "old-k8s-version-103800" exists ...
	I0315 21:15:49.901466    3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
	I0315 21:15:49.934700    3304 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.1066246s)
	I0315 21:15:49.936184    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0315 21:15:50.250574    3304 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 21:15:50.250698    3304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 21:15:50.264810    3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-103800
	I0315 21:15:50.363018    3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 21:15:50.573127    3304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65315 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\old-k8s-version-103800\id_rsa Username:docker}
	I0315 21:15:51.185346    3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 21:15:52.041249    3304 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-103800" context rescaled to 1 replicas
	I0315 21:15:52.041249    3304 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0315 21:15:52.050693    3304 out.go:177] * Verifying Kubernetes components...
	I0315 21:15:52.068989    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:15:52.931105    3304 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.994746s)
	I0315 21:15:52.931105    3304 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0315 21:15:53.543980    3304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.1809688s)
	I0315 21:15:53.543980    3304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3586386s)
	I0315 21:15:53.543980    3304 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.4749945s)
	I0315 21:15:53.547333    3304 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0315 21:15:53.551130    3304 addons.go:499] enable addons completed in 4.7230615s: enabled=[storage-provisioner default-storageclass]
	I0315 21:15:53.562222    3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-103800
	I0315 21:15:53.866492    3304 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-103800" to be "Ready" ...
	I0315 21:15:53.933789    3304 node_ready.go:49] node "old-k8s-version-103800" has status "Ready":"True"
	I0315 21:15:53.933928    3304 node_ready.go:38] duration metric: took 67.3813ms waiting for node "old-k8s-version-103800" to be "Ready" ...
	I0315 21:15:53.933978    3304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:15:53.954978    3304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace to be "Ready" ...
	I0315 21:15:55.263337    1332 ssh_runner.go:235] Completed: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0: (9.7945662s)
	I0315 21:15:55.280007    1332 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0315 21:15:50.791437    4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt ...
	I0315 21:15:50.811528    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: {Name:mk1a7714c10c13a7d5c8fb1098bc038f605ad5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:50.813206    4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key ...
	I0315 21:15:50.813206    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key: {Name:mk6d5b75048bc1f92c0f990335a0e77ae990113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:50.814115    4576 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c
	I0315 21:15:50.814711    4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0315 21:15:51.462758    4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c ...
	I0315 21:15:51.462758    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c: {Name:mkbe5d6759390ded2e92d33f951b55651f871d6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:51.465635    4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c ...
	I0315 21:15:51.465635    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c: {Name:mkeabc19ce40a151a2335523f300cb2173b405a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:51.465984    4576 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt
	I0315 21:15:51.467767    4576 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key
	I0315 21:15:51.475866    4576 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key
	I0315 21:15:51.475866    4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt with IP's: []
	I0315 21:15:51.587728    4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt ...
	I0315 21:15:51.587834    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt: {Name:mk7c62a1dda77e6dc05d2537ac317544e81f57a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:51.589765    4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key ...
	I0315 21:15:51.589848    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key: {Name:mk8190fc7ddb34a4dc4e27e4845c7aee9bb89866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:15:51.598260    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem (1338 bytes)
	W0315 21:15:51.600164    4576 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812_empty.pem, impossibly tiny 0 bytes
	I0315 21:15:51.600164    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0315 21:15:51.600164    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0315 21:15:51.600849    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0315 21:15:51.600849    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0315 21:15:51.601444    4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem (1708 bytes)
	I0315 21:15:51.603533    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0315 21:15:51.706046    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 21:15:51.773521    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 21:15:51.835553    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 21:15:51.896596    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 21:15:51.961384    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 21:15:52.020772    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 21:15:52.161594    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 21:15:52.223729    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 21:15:52.295451    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem --> /usr/share/ca-certificates/8812.pem (1338 bytes)
	I0315 21:15:52.368796    4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /usr/share/ca-certificates/88122.pem (1708 bytes)
	I0315 21:15:52.440447    4576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 21:15:52.501633    4576 ssh_runner.go:195] Run: openssl version
	I0315 21:15:52.539319    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8812.pem && ln -fs /usr/share/ca-certificates/8812.pem /etc/ssl/certs/8812.pem"
	I0315 21:15:52.596897    4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8812.pem
	I0315 21:15:52.617219    4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 15 20:10 /usr/share/ca-certificates/8812.pem
	I0315 21:15:52.634012    4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8812.pem
	I0315 21:15:52.676116    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8812.pem /etc/ssl/certs/51391683.0"
	I0315 21:15:52.732985    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88122.pem && ln -fs /usr/share/ca-certificates/88122.pem /etc/ssl/certs/88122.pem"
	I0315 21:15:52.795424    4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88122.pem
	I0315 21:15:52.811657    4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 15 20:10 /usr/share/ca-certificates/88122.pem
	I0315 21:15:52.824204    4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88122.pem
	I0315 21:15:52.868586    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/88122.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 21:15:52.920203    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 21:15:52.980456    4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:15:52.999359    4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 15 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:15:53.012117    4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:15:53.068045    4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
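The `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above implement OpenSSL's subject-hash directory convention: tools that verify against a CA directory look up issuers by `<subject-hash>.0` symlinks. A sketch with a throwaway self-signed CA in a scratch directory (the key, subject, and paths are illustrative):

```shell
# Throwaway self-signed CA in a scratch dir (stands in for the minikube
# CA certs copied into /usr/share/ca-certificates above).
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# CApath lookup is by <subject-hash>.0, the same scheme as the
# /etc/ssl/certs symlinks created in the log.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"

# With the hash link in place, verification via -CApath should report OK.
openssl verify -CApath "$dir" "$dir/ca.pem"
```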
	I0315 21:15:53.097602    4576 kubeadm.go:401] StartCluster: {Name:no-preload-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:no-preload-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 21:15:53.106935    4576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0315 21:15:53.188443    4576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 21:15:53.248153    4576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 21:15:53.292225    4576 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0315 21:15:53.310023    4576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 21:15:53.350373    4576 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 21:15:53.350373    4576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0315 21:15:53.480709    4576 kubeadm.go:322] W0315 21:15:53.477710    2248 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0315 21:15:53.619484    4576 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0315 21:15:53.941137    4576 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 21:15:56.130859    3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
	I0315 21:15:58.590590    3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
	I0315 21:15:55.667015    1332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 21:15:55.884955    1332 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 15 21:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Mar 15 21:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 15 21:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar 15 21:13 /etc/kubernetes/scheduler.conf
	
	I0315 21:15:55.906317    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 21:15:55.970490    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 21:15:56.077831    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 21:15:56.164837    1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:56.189369    1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 21:15:56.278633    1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 21:15:56.350783    1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0315 21:15:56.368651    1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 21:15:56.472488    1332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 21:15:56.554151    1332 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
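The grep/rm sequence above prunes stale kubeconfigs: each component conf is kept only if it already points at `https://control-plane.minikube.internal:8443`, and the rest are removed so `kubeadm init phase kubeconfig` regenerates them. A sketch of that pruning loop against scratch files standing in for /etc/kubernetes/*.conf (file names and the stale server URL are illustrative):

```shell
# Scratch stand-ins for the component kubeconfigs checked in the log.
dir=$(mktemp -d)
printf 'server: https://control-plane.minikube.internal:8443\n' > "$dir/admin.conf"
printf 'server: https://127.0.0.1:65165\n' > "$dir/controller-manager.conf"

# Keep a conf only if it targets the control-plane alias; delete the rest
# so they get regenerated by the next kubeadm init phase.
for f in "$dir"/*.conf; do
  grep -q 'https://control-plane.minikube.internal:8443' "$f" || rm -f "$f"
done

ls "$dir"
# -> admin.conf
```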
	I0315 21:15:56.554288    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:56.838520    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:58.821631    1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.9831146s)
	I0315 21:15:58.821631    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:59.241679    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:59.531884    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:15:59.837145    1332 api_server.go:51] waiting for apiserver process to appear ...
	I0315 21:15:59.862394    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:00.562737    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:01.081569    3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:03.528471    3304 pod_ready.go:92] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:03.528551    3304 pod_ready.go:81] duration metric: took 9.5735907s waiting for pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:03.528551    3304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cfcpx" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:03.557031    3304 pod_ready.go:92] pod "kube-proxy-cfcpx" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:03.557086    3304 pod_ready.go:81] duration metric: took 28.5355ms waiting for pod "kube-proxy-cfcpx" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:03.557086    3304 pod_ready.go:38] duration metric: took 9.623095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:03.557194    3304 api_server.go:51] waiting for apiserver process to appear ...
	I0315 21:16:03.572979    3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:03.613975    3304 api_server.go:71] duration metric: took 11.5727472s to wait for apiserver process to appear ...
	I0315 21:16:03.613975    3304 api_server.go:87] waiting for apiserver healthz status ...
	I0315 21:16:03.613975    3304 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65314/healthz ...
	I0315 21:16:03.643577    3304 api_server.go:278] https://127.0.0.1:65314/healthz returned 200:
	ok
	I0315 21:16:03.656457    3304 api_server.go:140] control plane version: v1.16.0
	I0315 21:16:03.656457    3304 api_server.go:130] duration metric: took 42.4823ms to wait for apiserver health ...
	I0315 21:16:03.656537    3304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 21:16:03.667107    3304 system_pods.go:59] 3 kube-system pods found
	I0315 21:16:03.667180    3304 system_pods.go:61] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:03.667180    3304 system_pods.go:61] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:03.667180    3304 system_pods.go:61] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:03.667180    3304 system_pods.go:74] duration metric: took 10.5957ms to wait for pod list to return data ...
	I0315 21:16:03.667180    3304 default_sa.go:34] waiting for default service account to be created ...
	I0315 21:16:03.676892    3304 default_sa.go:45] found service account: "default"
	I0315 21:16:03.677053    3304 default_sa.go:55] duration metric: took 9.8734ms for default service account to be created ...
	I0315 21:16:03.677104    3304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 21:16:01.047261    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:01.561853    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:02.057572    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:02.554491    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:03.060987    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:03.560744    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:04.058096    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:04.574094    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:05.054883    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:05.558867    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:04.285721    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:04.285721    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:04.285721    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:04.285721    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:04.285721    3304 retry.go:31] will retry after 219.526595ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:04.529762    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:04.529762    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:04.529762    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:04.529762    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:04.529762    3304 retry.go:31] will retry after 379.322135ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:04.941567    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:04.941567    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:04.941567    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:04.941567    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:04.941567    3304 retry.go:31] will retry after 439.394592ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:05.410063    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:05.410190    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:05.410190    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:05.410246    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:05.410246    3304 retry.go:31] will retry after 547.53451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:05.971998    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:05.971998    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:05.971998    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:05.971998    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:05.971998    3304 retry.go:31] will retry after 474.225372ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:06.466534    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:06.466718    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:06.466718    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:06.466718    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:06.466718    3304 retry.go:31] will retry after 680.585019ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:07.175871    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:07.175871    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:07.175871    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:07.175871    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:07.175871    3304 retry.go:31] will retry after 979.191711ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:08.550247    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:08.550247    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:08.550247    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:08.550247    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:08.550247    3304 retry.go:31] will retry after 1.232453731s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:06.064030    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:06.559451    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:06.836193    1332 api_server.go:71] duration metric: took 6.999061s to wait for apiserver process to appear ...
	I0315 21:16:06.836348    1332 api_server.go:87] waiting for apiserver healthz status ...
	I0315 21:16:06.836472    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:06.844702    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:16:07.349930    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:07.360047    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
	I0315 21:16:07.852770    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:09.202438   11164 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-348900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -I lz4 -xf /preloaded.tar -C /extractDir: (30.466496s)
	I0315 21:16:09.202651   11164 kic.go:199] duration metric: took 30.483946 seconds to extract preloaded images to volume
	I0315 21:16:09.210313   11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 21:16:10.155940   11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:16:09.4164826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 21:16:10.165464   11164 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0315 21:16:11.073846   11164 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-348900 --name embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-348900 --network embed-certs-348900 --ip 192.168.67.2 --volume embed-certs-348900:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f
	I0315 21:16:12.556246   11164 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-348900 --name embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-348900 --network embed-certs-348900 --ip 192.168.67.2 --volume embed-certs-348900:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f: (1.4822642s)
	I0315 21:16:12.573402   11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Running}}
	I0315 21:16:12.899930   11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
	I0315 21:16:13.219648   11164 cli_runner.go:164] Run: docker exec embed-certs-348900 stat /var/lib/dpkg/alternatives/iptables
	I0315 21:16:09.817018    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:09.817099    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:09.817128    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:09.817171    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:09.817212    3304 retry.go:31] will retry after 1.174345338s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:11.034520    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:11.034666    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:11.034666    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:11.034666    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:11.034865    3304 retry.go:31] will retry after 1.617952037s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:12.678044    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:12.678093    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:12.678161    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:12.678161    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:12.678161    3304 retry.go:31] will retry after 2.664928648s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:12.856341    1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0315 21:16:13.355164    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:13.531052    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0315 21:16:13.531052    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0315 21:16:13.856894    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:13.948093    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:13.948207    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:14.353756    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:14.444021    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:14.444582    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:14.850032    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:14.881729    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:14.881822    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:15.359619    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:15.458273    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0315 21:16:15.458359    1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0315 21:16:15.846895    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:15.875897    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
	ok
	I0315 21:16:15.909269    1332 api_server.go:140] control plane version: v1.26.2
	I0315 21:16:15.909297    1332 api_server.go:130] duration metric: took 9.0729659s to wait for apiserver health ...
	I0315 21:16:15.909353    1332 cni.go:84] Creating CNI manager for ""
	I0315 21:16:15.909353    1332 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:16:15.912744    1332 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
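The healthz wait above (api_server.go probing https://127.0.0.1:65165/healthz until the 500s clear, taking 9.07s in this run) is a bounded retry loop. A generic sketch of the same pattern, assuming bash; the URL and port below are taken from this specific run and are not stable values:

```shell
# Bounded retry: run a command until it succeeds or attempts are exhausted,
# similar in spirit to minikube's apiserver health wait.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0   # command succeeded
    sleep 1            # back off before the next probe
  done
  return 1             # gave up after $attempts tries
}

# Hypothetical usage against the endpoint seen in this log:
# retry 30 curl -fsSk https://127.0.0.1:65165/healthz
```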
	I0315 21:16:13.756342   11164 oci.go:144] the created container "embed-certs-348900" has a running status.
	I0315 21:16:13.756477   11164 kic.go:221] Creating ssh key for kic: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa...
	I0315 21:16:14.119932   11164 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0315 21:16:14.639346   11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
	I0315 21:16:14.940713   11164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0315 21:16:14.940713   11164 kic_runner.go:114] Args: [docker exec --privileged embed-certs-348900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0315 21:16:15.500441   11164 kic.go:261] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa...
	I0315 21:16:16.178648   11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
	I0315 21:16:16.488888   11164 machine.go:88] provisioning docker machine ...
	I0315 21:16:16.488888   11164 ubuntu.go:169] provisioning hostname "embed-certs-348900"
	I0315 21:16:16.502911   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:16.840113   11164 main.go:141] libmachine: Using SSH client type: native
	I0315 21:16:16.856244   11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65481 <nil> <nil>}
	I0315 21:16:16.856277   11164 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-348900 && echo "embed-certs-348900" | sudo tee /etc/hostname
	I0315 21:16:17.147013   11164 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-348900
	
	I0315 21:16:17.160758   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:17.464133   11164 main.go:141] libmachine: Using SSH client type: native
	I0315 21:16:17.465429   11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65481 <nil> <nil>}
	I0315 21:16:17.465429   11164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-348900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-348900/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-348900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 21:16:17.739135   11164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
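The SSH snippet above adds or rewrites the `127.0.1.1` entry so the machine's hostname resolves locally. A functionally equivalent sketch that operates on an arbitrary hosts file instead of /etc/hosts (hostname taken from this run, helper name hypothetical, GNU sed assumed):

```shell
# Ensure a hosts file maps 127.0.1.1 to the given hostname, idempotently.
patch_hosts() {
  local hosts=$1 name=$2
  # Skip if the hostname already appears as an entry.
  if ! grep -q "[[:space:]]$name\$" "$hosts"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
      # Rewrite an existing 127.0.1.1 line in place.
      sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
    else
      # Otherwise append a fresh entry.
      echo "127.0.1.1 $name" >> "$hosts"
    fi
  fi
}
```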
	I0315 21:16:17.739135   11164 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0315 21:16:17.739135   11164 ubuntu.go:177] setting up certificates
	I0315 21:16:17.739135   11164 provision.go:83] configureAuth start
	I0315 21:16:17.755889   11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
	I0315 21:16:18.035724   11164 provision.go:138] copyHostCerts
	I0315 21:16:18.036560   11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0315 21:16:18.036560   11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0315 21:16:18.037267   11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0315 21:16:18.038895   11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0315 21:16:18.038895   11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0315 21:16:18.039720   11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0315 21:16:18.041165   11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0315 21:16:18.041165   11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0315 21:16:18.041925   11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0315 21:16:18.042745   11164 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.embed-certs-348900 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-348900]
	I0315 21:16:15.383021    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:15.383097    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:15.383222    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:15.383222    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:15.383288    3304 retry.go:31] will retry after 2.578717787s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:17.995544    3304 system_pods.go:86] 3 kube-system pods found
	I0315 21:16:17.995544    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:17.995544    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:17.995544    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:17.997123    3304 retry.go:31] will retry after 3.689658526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:15.925415    1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 21:16:15.965847    1332 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 21:16:16.079955    1332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 21:16:16.096342    1332 system_pods.go:59] 6 kube-system pods found
	I0315 21:16:16.096342    1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0315 21:16:16.096342    1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0315 21:16:16.096342    1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0315 21:16:16.096342    1332 system_pods.go:74] duration metric: took 16.2168ms to wait for pod list to return data ...
	I0315 21:16:16.096342    1332 node_conditions.go:102] verifying NodePressure condition ...
	I0315 21:16:16.105140    1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0315 21:16:16.105226    1332 node_conditions.go:123] node cpu capacity is 16
	I0315 21:16:16.105269    1332 node_conditions.go:105] duration metric: took 8.8846ms to run NodePressure ...
	I0315 21:16:16.105316    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0315 21:16:17.333440    1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.2280887s)
	I0315 21:16:17.333615    1332 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0315 21:16:17.354686    1332 kubeadm.go:784] kubelet initialised
	I0315 21:16:17.354754    1332 kubeadm.go:785] duration metric: took 21.1391ms waiting for restarted kubelet to initialise ...
	I0315 21:16:17.354822    1332 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:17.435085    1332 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:19.521467    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:18.251532   11164 provision.go:172] copyRemoteCerts
	I0315 21:16:18.273974   11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 21:16:18.283506   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:18.570902   11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
	I0315 21:16:18.768649   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 21:16:18.841686   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0315 21:16:18.905617   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 21:16:18.967699   11164 provision.go:86] duration metric: configureAuth took 1.2285308s
	I0315 21:16:18.967770   11164 ubuntu.go:193] setting minikube options for container-runtime
	I0315 21:16:18.968727   11164 config.go:182] Loaded profile config "embed-certs-348900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:16:18.979877   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:19.285905   11164 main.go:141] libmachine: Using SSH client type: native
	I0315 21:16:19.286914   11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65481 <nil> <nil>}
	I0315 21:16:19.286979   11164 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0315 21:16:19.567687   11164 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0315 21:16:19.567687   11164 ubuntu.go:71] root file system type: overlay
	I0315 21:16:19.567687   11164 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0315 21:16:19.582813   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:19.874162   11164 main.go:141] libmachine: Using SSH client type: native
	I0315 21:16:19.875396   11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65481 <nil> <nil>}
	I0315 21:16:19.875396   11164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0315 21:16:20.174872   11164 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
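The unit generated above relies on a standard systemd idiom its own comments describe: an empty `ExecStart=` first clears any inherited start command, and the second `ExecStart=` then becomes the only one, avoiding the "more than one ExecStart= setting" error for `Type=notify` services. A minimal drop-in using the same idiom, with hypothetical path and dockerd flags (the real flags are in the unit above):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical path)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

After writing a drop-in like this, `systemctl daemon-reload` is required before a restart picks it up, which matches the command sequence minikube runs later in this log.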
	
	I0315 21:16:20.188182   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:20.453718   11164 main.go:141] libmachine: Using SSH client type: native
	I0315 21:16:20.454944   11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil>  [] 0s} 127.0.0.1 65481 <nil> <nil>}
	I0315 21:16:20.454944   11164 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0315 21:16:22.142486   11164 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-15 21:16:20.152689000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0315 21:16:22.142486   11164 machine.go:91] provisioned docker machine in 5.6536091s
	I0315 21:16:22.142486   11164 client.go:171] LocalClient.Create took 50.2614576s
	I0315 21:16:22.142486   11164 start.go:167] duration metric: libmachine.API.Create for "embed-certs-348900" took 50.2615841s
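The `sudo diff -u ... || { mv ...; daemon-reload; restart; }` one-liner above is an idempotent-update pattern: the unit is only swapped in, and the service only restarted, when the rendered file actually differs. Factored out as a file-level sketch (hypothetical helper name, no systemctl side effects):

```shell
# Replace dest with new only if their contents differ.
# Returns 0 when a change was installed (caller should reload/restart),
# 1 when the existing file was already up to date.
install_if_changed() {
  local new=$1 dest=$2
  if ! diff -u "$dest" "$new" >/dev/null 2>&1; then
    mv "$new" "$dest"
    return 0
  fi
  rm -f "$new"   # rendered copy is redundant
  return 1
}
```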
	I0315 21:16:22.142486   11164 start.go:300] post-start starting for "embed-certs-348900" (driver="docker")
	I0315 21:16:22.142486   11164 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 21:16:22.164869   11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 21:16:22.176134   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:22.457317   11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
	I0315 21:16:22.664346   11164 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 21:16:22.686266   11164 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0315 21:16:22.686266   11164 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0315 21:16:22.686266   11164 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0315 21:16:22.686266   11164 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0315 21:16:22.686266   11164 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0315 21:16:22.686902   11164 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0315 21:16:22.688699   11164 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem -> 88122.pem in /etc/ssl/certs
	I0315 21:16:22.706595   11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 21:16:22.738368   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /etc/ssl/certs/88122.pem (1708 bytes)
	I0315 21:16:22.808162   11164 start.go:303] post-start completed in 665.6768ms
	I0315 21:16:22.820367   11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
	I0315 21:16:23.085450   11164 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json ...
	I0315 21:16:23.099327   11164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 21:16:23.105640   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:21.705945    3304 system_pods.go:86] 4 kube-system pods found
	I0315 21:16:21.706010    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:21.706103    3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Pending
	I0315 21:16:21.706103    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:21.706185    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:21.706219    3304 retry.go:31] will retry after 5.083561084s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:22.006711    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:24.016700    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:23.396840   11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
	I0315 21:16:23.581013   11164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0315 21:16:23.600663   11164 start.go:128] duration metric: createHost completed in 51.7244434s
	I0315 21:16:23.600663   11164 start.go:83] releasing machines lock for "embed-certs-348900", held for 51.7253337s
	I0315 21:16:23.612591   11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
	I0315 21:16:23.883432   11164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 21:16:23.894275   11164 ssh_runner.go:195] Run: cat /version.json
	I0315 21:16:23.894535   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:23.897398   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:24.187980   11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
	I0315 21:16:24.211376   11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
	I0315 21:16:24.384184   11164 ssh_runner.go:195] Run: systemctl --version
	I0315 21:16:24.554870   11164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0315 21:16:24.601965   11164 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0315 21:16:24.636442   11164 start.go:407] unable to name loopback interface in dockerConfigureNetworkPlugin: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0315 21:16:24.653193   11164 ssh_runner.go:195] Run: which cri-dockerd
	I0315 21:16:24.687918   11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0315 21:16:24.720950   11164 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0315 21:16:24.782057   11164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 21:16:24.838659   11164 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0315 21:16:24.838782   11164 start.go:485] detecting cgroup driver to use...
	I0315 21:16:24.838782   11164 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0315 21:16:24.839372   11164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 21:16:24.907810   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0315 21:16:24.962942   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0315 21:16:24.999607   11164 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0315 21:16:25.016372   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0315 21:16:25.084691   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 21:16:25.123717   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0315 21:16:25.175564   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0315 21:16:25.220146   11164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 21:16:25.283915   11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0315 21:16:25.334938   11164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 21:16:25.388356   11164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 21:16:25.435298   11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 21:16:25.641460   11164 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0315 21:16:25.860833   11164 start.go:485] detecting cgroup driver to use...
	I0315 21:16:25.861441   11164 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0315 21:16:25.882735   11164 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0315 21:16:25.939579   11164 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0315 21:16:25.960420   11164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0315 21:16:26.059890   11164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 21:16:26.183579   11164 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0315 21:16:26.466649   11164 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0315 21:16:26.677013   11164 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0315 21:16:26.677080   11164 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0315 21:16:26.756071   11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 21:16:26.959814   11164 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0315 21:16:27.700313   11164 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0315 21:16:27.915578   11164 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0315 21:16:28.148265   11164 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0315 21:16:26.834333    3304 system_pods.go:86] 5 kube-system pods found
	I0315 21:16:26.834442    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:26.834494    3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Running
	I0315 21:16:26.834494    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:26.834542    3304 system_pods.go:89] "kube-scheduler-old-k8s-version-103800" [2c673315-0d1e-4a5d-a5d7-738e38d7cf84] Pending
	I0315 21:16:26.834542    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:26.834542    3304 retry.go:31] will retry after 6.853083205s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:29.227662    4576 kubeadm.go:322] [init] Using Kubernetes version: v1.26.2
	I0315 21:16:29.227763    4576 kubeadm.go:322] [preflight] Running pre-flight checks
	I0315 21:16:29.227763    4576 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 21:16:29.227763    4576 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 21:16:29.227763    4576 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 21:16:29.229013    4576 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 21:16:29.233640    4576 out.go:204]   - Generating certificates and keys ...
	I0315 21:16:29.234315    4576 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0315 21:16:29.234315    4576 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0315 21:16:29.234315    4576 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 21:16:29.234862    4576 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0315 21:16:29.235050    4576 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0315 21:16:29.235155    4576 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0315 21:16:29.235331    4576 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0315 21:16:29.235774    4576 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-470000] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0315 21:16:29.235871    4576 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0315 21:16:29.235871    4576 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-470000] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0315 21:16:29.236566    4576 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 21:16:29.236865    4576 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 21:16:29.237080    4576 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0315 21:16:29.237437    4576 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 21:16:29.237659    4576 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 21:16:29.237841    4576 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 21:16:29.238095    4576 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 21:16:29.238325    4576 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 21:16:29.238639    4576 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 21:16:29.238966    4576 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 21:16:29.239000    4576 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0315 21:16:29.239299    4576 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 21:16:29.244122    4576 out.go:204]   - Booting up control plane ...
	I0315 21:16:29.244122    4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 21:16:29.244122    4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 21:16:29.244875    4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 21:16:29.245231    4576 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 21:16:29.245856    4576 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 21:16:29.246514    4576 kubeadm.go:322] [apiclient] All control plane components are healthy after 27.005043 seconds
	I0315 21:16:29.247464    4576 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 21:16:29.247889    4576 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 21:16:29.247889    4576 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 21:16:29.249317    4576 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-470000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 21:16:29.249647    4576 kubeadm.go:322] [bootstrap-token] Using token: g8jwe6.dtydkfj8fkgcjwxk
	I0315 21:16:29.253362    4576 out.go:204]   - Configuring RBAC rules ...
	I0315 21:16:29.253362    4576 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 21:16:29.253982    4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 21:16:29.254534    4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 21:16:29.254971    4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 21:16:29.255290    4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 21:16:29.255767    4576 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 21:16:29.256101    4576 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 21:16:29.256445    4576 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0315 21:16:29.256697    4576 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0315 21:16:29.256697    4576 kubeadm.go:322] 
	I0315 21:16:29.256697    4576 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0315 21:16:29.256697    4576 kubeadm.go:322] 
	I0315 21:16:29.256697    4576 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0315 21:16:29.257255    4576 kubeadm.go:322] 
	I0315 21:16:29.257312    4576 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0315 21:16:29.257312    4576 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 21:16:29.258206    4576 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 21:16:29.258206    4576 kubeadm.go:322] 
	I0315 21:16:29.258392    4576 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0315 21:16:29.258392    4576 kubeadm.go:322] 
	I0315 21:16:29.258392    4576 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 21:16:29.258392    4576 kubeadm.go:322] 
	I0315 21:16:29.259028    4576 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0315 21:16:29.259028    4576 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 21:16:29.259028    4576 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 21:16:29.259586    4576 kubeadm.go:322] 
	I0315 21:16:29.259793    4576 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 21:16:29.259793    4576 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0315 21:16:29.259793    4576 kubeadm.go:322] 
	I0315 21:16:29.260469    4576 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g8jwe6.dtydkfj8fkgcjwxk \
	I0315 21:16:29.260726    4576 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 \
	I0315 21:16:29.260890    4576 kubeadm.go:322] 	--control-plane 
	I0315 21:16:29.260890    4576 kubeadm.go:322] 
	I0315 21:16:29.261169    4576 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0315 21:16:29.261228    4576 kubeadm.go:322] 
	I0315 21:16:29.261412    4576 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g8jwe6.dtydkfj8fkgcjwxk \
	I0315 21:16:29.261412    4576 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 
	I0315 21:16:29.261412    4576 cni.go:84] Creating CNI manager for ""
	I0315 21:16:29.261412    4576 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:16:29.266347    4576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 21:16:28.373729   11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 21:16:28.596843   11164 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0315 21:16:28.641503   11164 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0315 21:16:28.659715   11164 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0315 21:16:28.687449   11164 start.go:553] Will wait 60s for crictl version
	I0315 21:16:28.704098   11164 ssh_runner.go:195] Run: which crictl
	I0315 21:16:28.753769   11164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 21:16:29.076356   11164 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0315 21:16:29.092004   11164 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0315 21:16:29.211116   11164 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0315 21:16:26.048179    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:28.050667    1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
	I0315 21:16:29.001447    1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.001447    1332 pod_ready.go:81] duration metric: took 11.5663842s waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.001447    1332 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.028330    1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.028330    1332 pod_ready.go:81] duration metric: took 26.8832ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.028330    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.057628    1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.057628    1332 pod_ready.go:81] duration metric: took 29.2978ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.057628    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.092004    1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.092004    1332 pod_ready.go:81] duration metric: took 34.3758ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.092004    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.131434    1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.131486    1332 pod_ready.go:81] duration metric: took 39.482ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.131486    1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.402295    1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:29.402345    1332 pod_ready.go:81] duration metric: took 270.8098ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.402345    1332 pod_ready.go:38] duration metric: took 12.0475003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:29.402386    1332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 21:16:29.426130    1332 ops.go:34] apiserver oom_adj: -16
	I0315 21:16:29.426187    1332 kubeadm.go:637] restartCluster took 1m4.338895s
	I0315 21:16:29.426266    1332 kubeadm.go:403] StartCluster complete in 1m4.4532784s
	I0315 21:16:29.426351    1332 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:29.426601    1332 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 21:16:29.429857    1332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:29.432982    1332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 21:16:29.432982    1332 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0315 21:16:29.433680    1332 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:16:29.438415    1332 out.go:177] * Enabled addons: 
	I0315 21:16:29.443738    1332 addons.go:499] enable addons completed in 10.8462ms: enabled=[]
	I0315 21:16:29.452842    1332 kapi.go:59] client config for pause-073300: &rest.Config{Host:"https://127.0.0.1:65165", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1deb720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0315 21:16:29.467764    1332 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-073300" context rescaled to 1 replicas
	I0315 21:16:29.467764    1332 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0315 21:16:29.470858    1332 out.go:177] * Verifying Kubernetes components...
	I0315 21:16:29.484573    1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:16:29.761590    1332 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0315 21:16:29.775423    1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-073300
	I0315 21:16:30.117208    1332 node_ready.go:35] waiting up to 6m0s for node "pause-073300" to be "Ready" ...
	I0315 21:16:30.134817    1332 node_ready.go:49] node "pause-073300" has status "Ready":"True"
	I0315 21:16:30.134886    1332 node_ready.go:38] duration metric: took 17.4789ms waiting for node "pause-073300" to be "Ready" ...
	I0315 21:16:30.135066    1332 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:30.162562    1332 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.219441    1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:30.219583    1332 pod_ready.go:81] duration metric: took 57.0207ms waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.219583    1332 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.608418    1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:30.608458    1332 pod_ready.go:81] duration metric: took 388.876ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:30.608458    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:29.286357    4576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 21:16:29.434851    4576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0315 21:16:29.759117    4576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 21:16:29.777121    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:29.784090    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=no-preload-470000 minikube.k8s.io/updated_at=2023_03_15T21_16_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:29.333720   11164 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0315 21:16:29.346161   11164 cli_runner.go:164] Run: docker exec -t embed-certs-348900 dig +short host.docker.internal
	I0315 21:16:29.900879   11164 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0315 21:16:29.916562   11164 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0315 21:16:29.935552   11164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 21:16:29.995136   11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-348900
	I0315 21:16:30.338304   11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0315 21:16:30.350351   11164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0315 21:16:30.410968   11164 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0315 21:16:30.410997   11164 docker.go:560] Images already preloaded, skipping extraction
	I0315 21:16:30.423332   11164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0315 21:16:30.503657   11164 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0315 21:16:30.503657   11164 cache_images.go:84] Images are preloaded, skipping loading
	I0315 21:16:30.514842   11164 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0315 21:16:30.592454   11164 cni.go:84] Creating CNI manager for ""
	I0315 21:16:30.593071   11164 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 21:16:30.593126   11164 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0315 21:16:30.593164   11164 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-348900 NodeName:embed-certs-348900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0315 21:16:30.593164   11164 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-348900"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 21:16:30.593164   11164 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-348900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0315 21:16:30.608458   11164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0315 21:16:30.650429   11164 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 21:16:30.663574   11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 21:16:30.692787   11164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0315 21:16:30.740392   11164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 21:16:30.785258   11164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0315 21:16:30.856683   11164 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0315 21:16:30.874232   11164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 21:16:30.910227   11164 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900 for IP: 192.168.67.2
	I0315 21:16:30.910227   11164 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:30.910959   11164 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0315 21:16:30.910959   11164 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0315 21:16:30.912090   11164 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key
	I0315 21:16:30.912245   11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt with IP's: []
	I0315 21:16:31.176322   11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt ...
	I0315 21:16:31.176322   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt: {Name:mk3adaad25efd04206f4069d51ba11c764eb6365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:31.185180   11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key ...
	I0315 21:16:31.186710   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key: {Name:mkf9f54f56133eba18d6e348fef5a1556121e000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:31.186988   11164 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e
	I0315 21:16:31.187994   11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0315 21:16:31.980645   11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e ...
	I0315 21:16:31.980645   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e: {Name:mk2261dfadf80693084f767fa62cccae0b07268d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:31.987167   11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e ...
	I0315 21:16:31.987167   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e: {Name:mk003ae0b84dcfe7543e40c97ad15121d53cc917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:31.988356   11164 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt
	I0315 21:16:31.999575   11164 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key
	I0315 21:16:32.001372   11164 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key
	I0315 21:16:32.001790   11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt with IP's: []
	I0315 21:16:32.228690   11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt ...
	I0315 21:16:32.228763   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt: {Name:mk6cbb1c106aa2dec99a9338908a5ea76d5206ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:32.230290   11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key ...
	I0315 21:16:32.230290   11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key: {Name:mk5c3038fe2a59bd4ebdf1cb320d733f3de9b70e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:32.243236   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem (1338 bytes)
	W0315 21:16:32.243866   11164 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812_empty.pem, impossibly tiny 0 bytes
	I0315 21:16:32.244089   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0315 21:16:32.244671   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0315 21:16:32.245081   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0315 21:16:32.245162   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0315 21:16:32.245850   11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem (1708 bytes)
	I0315 21:16:32.248063   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0315 21:16:32.321659   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 21:16:32.402505   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 21:16:32.491666   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0315 21:16:32.579600   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 21:16:32.651879   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 21:16:32.716051   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 21:16:32.797235   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0315 21:16:32.885295   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem --> /usr/share/ca-certificates/8812.pem (1338 bytes)
	I0315 21:16:32.963869   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /usr/share/ca-certificates/88122.pem (1708 bytes)
	I0315 21:16:33.029503   11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 21:16:33.108304   11164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 21:16:33.169580   11164 ssh_runner.go:195] Run: openssl version
	I0315 21:16:33.195467   11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 21:16:33.230164   11164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:16:31.017074    1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:31.017074    1332 pod_ready.go:81] duration metric: took 408.6175ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.017074    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.395349    1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:31.395349    1332 pod_ready.go:81] duration metric: took 378.275ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.395349    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.792495    1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:31.792495    1332 pod_ready.go:81] duration metric: took 397.1476ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:31.792495    1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:32.219569    1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
	I0315 21:16:32.220120    1332 pod_ready.go:81] duration metric: took 427.0739ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:32.220120    1332 pod_ready.go:38] duration metric: took 2.0850147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:32.220120    1332 api_server.go:51] waiting for apiserver process to appear ...
	I0315 21:16:32.232971    1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 21:16:32.332638    1332 api_server.go:71] duration metric: took 2.8648801s to wait for apiserver process to appear ...
	I0315 21:16:32.332638    1332 api_server.go:87] waiting for apiserver healthz status ...
	I0315 21:16:32.332638    1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
	I0315 21:16:32.362918    1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
	ok
	I0315 21:16:32.430820    1332 api_server.go:140] control plane version: v1.26.2
	I0315 21:16:32.430820    1332 api_server.go:130] duration metric: took 98.1819ms to wait for apiserver health ...
	I0315 21:16:32.430820    1332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 21:16:32.455349    1332 system_pods.go:59] 6 kube-system pods found
	I0315 21:16:32.455486    1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
	I0315 21:16:32.455486    1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
	I0315 21:16:32.455544    1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
	I0315 21:16:32.455642    1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
	I0315 21:16:32.455642    1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
	I0315 21:16:32.455685    1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
	I0315 21:16:32.455785    1332 system_pods.go:74] duration metric: took 24.9239ms to wait for pod list to return data ...
	I0315 21:16:32.455785    1332 default_sa.go:34] waiting for default service account to be created ...
	I0315 21:16:32.637154    1332 default_sa.go:45] found service account: "default"
	I0315 21:16:32.637301    1332 default_sa.go:55] duration metric: took 181.4813ms for default service account to be created ...
	I0315 21:16:32.637301    1332 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 21:16:32.844031    1332 system_pods.go:86] 6 kube-system pods found
	I0315 21:16:32.844031    1332 system_pods.go:89] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
	I0315 21:16:32.844031    1332 system_pods.go:89] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
	I0315 21:16:32.844031    1332 system_pods.go:126] duration metric: took 206.7296ms to wait for k8s-apps to be running ...
	I0315 21:16:32.844031    1332 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 21:16:32.858698    1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:16:32.902525    1332 system_svc.go:56] duration metric: took 56.9493ms WaitForService to wait for kubelet.
	I0315 21:16:32.902598    1332 kubeadm.go:578] duration metric: took 3.4348415s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0315 21:16:32.902669    1332 node_conditions.go:102] verifying NodePressure condition ...
	I0315 21:16:33.016156    1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0315 21:16:33.016241    1332 node_conditions.go:123] node cpu capacity is 16
	I0315 21:16:33.016278    1332 node_conditions.go:105] duration metric: took 113.5716ms to run NodePressure ...
	I0315 21:16:33.016316    1332 start.go:228] waiting for startup goroutines ...
	I0315 21:16:33.016316    1332 start.go:233] waiting for cluster config update ...
	I0315 21:16:33.016351    1332 start.go:242] writing updated cluster config ...
	I0315 21:16:33.039378    1332 ssh_runner.go:195] Run: rm -f paused
	I0315 21:16:33.289071    1332 start.go:555] kubectl: 1.18.2, cluster: 1.26.2 (minor skew: 8)
	I0315 21:16:33.292949    1332 out.go:177] 
	W0315 21:16:33.295479    1332 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.2.
	I0315 21:16:33.297706    1332 out.go:177]   - Want kubectl v1.26.2? Try 'minikube kubectl -- get pods -A'
	I0315 21:16:33.301501    1332 out.go:177] * Done! kubectl is now configured to use "pause-073300" cluster and "default" namespace by default
	I0315 21:16:33.717595    3304 system_pods.go:86] 6 kube-system pods found
	I0315 21:16:33.717595    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:33.717595    3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Running
	I0315 21:16:33.717595    3304 system_pods.go:89] "kube-controller-manager-old-k8s-version-103800" [eaf30ba4-8812-46a0-a046-aa376656a6eb] Pending
	I0315 21:16:33.717595    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:33.717595    3304 system_pods.go:89] "kube-scheduler-old-k8s-version-103800" [2c673315-0d1e-4a5d-a5d7-738e38d7cf84] Pending
	I0315 21:16:33.717595    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:33.717595    3304 retry.go:31] will retry after 7.396011667s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0315 21:16:31.527682    4576 ssh_runner.go:235] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.7684448s)
	I0315 21:16:31.527682    4576 ops.go:34] apiserver oom_adj: -16
	I0315 21:16:31.527682    4576 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=no-preload-470000 minikube.k8s.io/updated_at=2023_03_15T21_16_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.7435955s)
	I0315 21:16:31.528138    4576 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.7509547s)
	I0315 21:16:31.546907    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:32.651879    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:33.157563    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:33.663575    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:34.656851    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:35.154601    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:35.655087    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:33.243154   11164 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 15 19:59 /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:16:33.255159   11164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 21:16:33.304401   11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 21:16:33.376002   11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8812.pem && ln -fs /usr/share/ca-certificates/8812.pem /etc/ssl/certs/8812.pem"
	I0315 21:16:33.437551   11164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8812.pem
	I0315 21:16:33.457067   11164 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 15 20:10 /usr/share/ca-certificates/8812.pem
	I0315 21:16:33.472683   11164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8812.pem
	I0315 21:16:33.512180   11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8812.pem /etc/ssl/certs/51391683.0"
	I0315 21:16:33.594619   11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88122.pem && ln -fs /usr/share/ca-certificates/88122.pem /etc/ssl/certs/88122.pem"
	I0315 21:16:33.678203   11164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88122.pem
	I0315 21:16:33.719266   11164 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 15 20:10 /usr/share/ca-certificates/88122.pem
	I0315 21:16:33.747747   11164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88122.pem
	I0315 21:16:33.801073   11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/88122.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 21:16:33.858449   11164 kubeadm.go:401] StartCluster: {Name:embed-certs-348900 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 21:16:33.875455   11164 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0315 21:16:33.983958   11164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 21:16:34.106643   11164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 21:16:34.166229   11164 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0315 21:16:34.191715   11164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 21:16:34.244848   11164 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 21:16:34.245009   11164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0315 21:16:34.448526   11164 kubeadm.go:322] W0315 21:16:34.443079    1446 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0315 21:16:34.611048   11164 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0315 21:16:34.911105   11164 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 21:16:36.155261    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:36.653892    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:37.161298    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:37.652599    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:38.154702    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:38.645135    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:39.164169    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:39.658552    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:40.157502    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:41.157705    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:42.163009    4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 21:16:43.340102    4576 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.1770953s)
	I0315 21:16:43.340102    4576 kubeadm.go:1073] duration metric: took 13.5808883s to wait for elevateKubeSystemPrivileges.
	I0315 21:16:43.340102    4576 kubeadm.go:403] StartCluster complete in 50.2426818s
	I0315 21:16:43.340102    4576 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:43.341117    4576 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 21:16:43.344404    4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 21:16:43.346496    4576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 21:16:43.346496    4576 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0315 21:16:43.346496    4576 addons.go:66] Setting storage-provisioner=true in profile "no-preload-470000"
	I0315 21:16:43.347047    4576 addons.go:228] Setting addon storage-provisioner=true in "no-preload-470000"
	I0315 21:16:43.347047    4576 addons.go:66] Setting default-storageclass=true in profile "no-preload-470000"
	I0315 21:16:43.347047    4576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-470000"
	I0315 21:16:43.347221    4576 host.go:66] Checking if "no-preload-470000" exists ...
	I0315 21:16:43.347249    4576 config.go:182] Loaded profile config "no-preload-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 21:16:43.379427    4576 cli_runner.go:164] Run: docker container inspect no-preload-470000 --format={{.State.Status}}
	I0315 21:16:43.381213    4576 cli_runner.go:164] Run: docker container inspect no-preload-470000 --format={{.State.Status}}
	I0315 21:16:43.745668    4576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 21:16:41.133563    3304 system_pods.go:86] 7 kube-system pods found
	I0315 21:16:41.133563    3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
	I0315 21:16:41.133686    3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Running
	I0315 21:16:41.133686    3304 system_pods.go:89] "kube-apiserver-old-k8s-version-103800" [2bad5a6b-39e8-46ef-8bd8-d1571bdfb33d] Pending
	I0315 21:16:41.133686    3304 system_pods.go:89] "kube-controller-manager-old-k8s-version-103800" [eaf30ba4-8812-46a0-a046-aa376656a6eb] Running
	I0315 21:16:41.133781    3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
	I0315 21:16:41.133781    3304 system_pods.go:89] "kube-scheduler-old-k8s-version-103800" [2c673315-0d1e-4a5d-a5d7-738e38d7cf84] Running
	I0315 21:16:41.133781    3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
	I0315 21:16:41.133781    3304 retry.go:31] will retry after 8.389208299s: missing components: kube-apiserver
	I0315 21:16:43.747702    4576 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 21:16:43.748309    4576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 21:16:43.757168    4576 addons.go:228] Setting addon default-storageclass=true in "no-preload-470000"
	I0315 21:16:43.757927    4576 host.go:66] Checking if "no-preload-470000" exists ...
	I0315 21:16:43.767770    4576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-470000
	I0315 21:16:43.791366    4576 cli_runner.go:164] Run: docker container inspect no-preload-470000 --format={{.State.Status}}
	I0315 21:16:44.188975    4576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65272 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\no-preload-470000\id_rsa Username:docker}
	I0315 21:16:44.217518    4576 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 21:16:44.217590    4576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 21:16:44.244690    4576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-470000
	I0315 21:16:44.434385    4576 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-470000" context rescaled to 1 replicas
	I0315 21:16:44.434385    4576 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0315 21:16:44.439077    4576 out.go:177] * Verifying Kubernetes components...
	I0315 21:16:44.472892    4576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 21:16:44.602535    4576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65272 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\no-preload-470000\id_rsa Username:docker}
	I0315 21:16:44.646754    4576 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.3002612s)
	I0315 21:16:44.647304    4576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0315 21:16:44.663208    4576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-470000
	I0315 21:16:44.860417    4576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 21:16:45.012884    4576 node_ready.go:35] waiting up to 6m0s for node "no-preload-470000" to be "Ready" ...
	I0315 21:16:45.053358    4576 node_ready.go:49] node "no-preload-470000" has status "Ready":"True"
	I0315 21:16:45.053358    4576 node_ready.go:38] duration metric: took 40.4219ms waiting for node "no-preload-470000" to be "Ready" ...
	I0315 21:16:45.053358    4576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 21:16:45.161915    4576 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-vlgxh" in "kube-system" namespace to be "Ready" ...
	I0315 21:16:45.558149    4576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2023-03-15 21:13:03 UTC, end at Wed 2023-03-15 21:16:48 UTC. --
	Mar 15 21:15:16 pause-073300 dockerd[5130]: time="2023-03-15T21:15:16.627341500Z" level=info msg="Loading containers: start."
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.180814100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.293764400Z" level=info msg="Loading containers: done."
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403670900Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403801700Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403820400Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403829500Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403946800Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.404077100Z" level=info msg="Daemon has completed initialization"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.495876500Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 15 21:15:17 pause-073300 systemd[1]: Started Docker Application Container Engine.
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.517552200Z" level=info msg="API listen on [::]:2376"
	Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.543627500Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.744692100Z" level=info msg="ignoring event" container=923853eff8e2f1864e6cfeaaffa94363f41b1b6d4244613c11e443d63b83f2f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.744884600Z" level=info msg="ignoring event" container=51f04c53d355992b4720b6fe3fb08eeebaffdc34d08262d17db9f24dc486c5f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.839172700Z" level=info msg="ignoring event" container=c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.840438600Z" level=info msg="ignoring event" container=e92b1a5d6d0c83422026888e04b4103fbb1a6aad2a814bd916a79bec7e5cb8d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.853642900Z" level=info msg="ignoring event" container=a35da045d30f2532ff1a5d88e989615ddf33df4f90272696757ca1b38c1a5eba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.927068700Z" level=info msg="ignoring event" container=ed67a04efb8ec818ab6782a05f9c291801a4458a1a0233c184aaf80f6bd8a373 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.927810400Z" level=info msg="ignoring event" container=95e8431f84471d1685f5d908a022789eb2644a61f5292997dfe306c1e9821c27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.033930300Z" level=info msg="ignoring event" container=e722cf7eda6bbc9bcf453efc486e10336872ccd7d74dbeb91e51085c094b0009 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.128698500Z" level=info msg="ignoring event" container=1f51fce69c226f17529256ccf645edbf972854fc5f36bf524dd8bb1a98d65d9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.434269500Z" level=info msg="ignoring event" container=6824568445c66b1f085e714f1a98df4ca1f40f4f7f67ed8f6069fbde15fd4b87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:51 pause-073300 dockerd[5130]: time="2023-03-15T21:15:51.189996200Z" level=info msg="ignoring event" container=e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 15 21:15:55 pause-073300 dockerd[5130]: time="2023-03-15T21:15:55.079374900Z" level=info msg="ignoring event" container=0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	c3986aec6e000       5185b96f0becf       22 seconds ago       Running             coredns                   2                   a5bac8046c295
	b7b4669a56d5c       6f64e7135a6ec       24 seconds ago       Running             kube-proxy                2                   0e90c4b9c88b9
	aba41f11fdc83       fce326961ae2d       48 seconds ago       Running             etcd                      2                   f6e4108617808
	571d485669178       db8f409d9a5d7       48 seconds ago       Running             kube-scheduler            2                   cc13660f35478
	e6bb3d9a35ff0       240e201d5b0d8       48 seconds ago       Running             kube-controller-manager   3                   c468745ca2cf5
	88f9444587356       63d3239c3c159       48 seconds ago       Running             kube-apiserver            3                   5496303bf33fe
	e3043962e5ef5       5185b96f0becf       About a minute ago   Exited              coredns                   1                   51f04c53d3559
	6824568445c66       fce326961ae2d       About a minute ago   Exited              etcd                      1                   a35da045d30f2
	95e8431f84471       db8f409d9a5d7       About a minute ago   Exited              kube-scheduler            1                   923853eff8e2f
	1f51fce69c226       240e201d5b0d8       About a minute ago   Exited              kube-controller-manager   2                   e722cf7eda6bb
	c2ad60cad36db       6f64e7135a6ec       About a minute ago   Exited              kube-proxy                1                   e92b1a5d6d0c8
	0cb5567e32abb       63d3239c3c159       About a minute ago   Exited              kube-apiserver            2                   ed67a04efb8ec
	
	* 
	* ==> coredns [c3986aec6e00] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:39857 - 53557 "HINFO IN 4117550418294164078.6192551117797702913. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0986876s
	
	* 
	* ==> coredns [e3043962e5ef] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:58165 - 40858 "HINFO IN 6114658028450402923.1632777775304523244. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0560197s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-073300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-073300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e
	                    minikube.k8s.io/name=pause-073300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_15T21_14_05_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Mar 2023 21:13:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-073300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 15 Mar 2023 21:16:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Mar 2023 21:16:13 +0000   Wed, 15 Mar 2023 21:13:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Mar 2023 21:16:13 +0000   Wed, 15 Mar 2023 21:13:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Mar 2023 21:16:13 +0000   Wed, 15 Mar 2023 21:13:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 15 Mar 2023 21:16:13 +0000   Wed, 15 Mar 2023 21:14:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-073300
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 b1932dc991aa41bd806e459062926d45
	  System UUID:                b1932dc991aa41bd806e459062926d45
	  Boot ID:                    c49fbee3-0cdd-49eb-8984-31df821a263f
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-2q246                100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m32s
	  kube-system                 etcd-pause-073300                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         2m49s
	  kube-system                 kube-apiserver-pause-073300             250m (1%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-controller-manager-pause-073300    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-proxy-m4md5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-pause-073300             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m24s                  kube-proxy       
	  Normal  Starting                 23s                    kube-proxy       
	  Normal  NodeHasSufficientPID     3m19s (x7 over 3m20s)  kubelet          Node pause-073300 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    3m19s (x8 over 3m20s)  kubelet          Node pause-073300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  3m19s (x8 over 3m20s)  kubelet          Node pause-073300 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m44s                  kubelet          Node pause-073300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s                  kubelet          Node pause-073300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m44s                  kubelet          Node pause-073300 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m43s                  kubelet          Node pause-073300 status is now: NodeNotReady
	  Normal  NodeReady                2m42s                  kubelet          Node pause-073300 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m33s                  node-controller  Node pause-073300 event: Registered Node pause-073300 in Controller
	  Normal  Starting                 50s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     49s (x7 over 49s)      kubelet          Node pause-073300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  49s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  48s (x8 over 49s)      kubelet          Node pause-073300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 49s)      kubelet          Node pause-073300 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           21s                    node-controller  Node pause-073300 event: Registered Node pause-073300 in Controller
	
	* 
	* ==> dmesg <==
	* [Mar15 20:45] WSL2: Performing memory compaction.
	[Mar15 20:47] WSL2: Performing memory compaction.
	[Mar15 20:48] WSL2: Performing memory compaction.
	[Mar15 20:49] WSL2: Performing memory compaction.
	[Mar15 20:51] WSL2: Performing memory compaction.
	[Mar15 20:52] WSL2: Performing memory compaction.
	[Mar15 20:53] WSL2: Performing memory compaction.
	[Mar15 20:54] WSL2: Performing memory compaction.
	[Mar15 20:56] WSL2: Performing memory compaction.
	[Mar15 20:57] WSL2: Performing memory compaction.
	[Mar15 20:58] WSL2: Performing memory compaction.
	[Mar15 20:59] WSL2: Performing memory compaction.
	[Mar15 21:00] WSL2: Performing memory compaction.
	[Mar15 21:01] WSL2: Performing memory compaction.
	[Mar15 21:03] WSL2: Performing memory compaction.
	[ +24.007152] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Mar15 21:04] process 'docker/tmp/qemu-check145175011/check' started with executable stack
	[ +21.555954] WSL2: Performing memory compaction.
	[Mar15 21:06] WSL2: Performing memory compaction.
	[Mar15 21:07] hrtimer: interrupt took 920300 ns
	[Mar15 21:09] WSL2: Performing memory compaction.
	[Mar15 21:11] WSL2: Performing memory compaction.
	[Mar15 21:12] WSL2: Performing memory compaction.
	[Mar15 21:13] WSL2: Performing memory compaction.
	[Mar15 21:15] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [6824568445c6] <==
	* {"level":"info","ts":"2023-03-15T21:15:44.027Z","caller":"traceutil/trace.go:171","msg":"trace[2137636385] transaction","detail":"{read_only:false; number_of_response:1; response_revision:415; }","duration":"100.9434ms","start":"2023-03-15T21:15:43.926Z","end":"2023-03-15T21:15:44.027Z","steps":["trace[2137636385] 'process raft request'  (duration: 100.5034ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-15T21:15:44.540Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.6806ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873768454336989569 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" mod_revision:412 > success:<request_put:<key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" value_size:582 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-03-15T21:15:44.541Z","caller":"traceutil/trace.go:171","msg":"trace[493093877] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"109.7451ms","start":"2023-03-15T21:15:44.431Z","end":"2023-03-15T21:15:44.541Z","steps":["trace[493093877] 'process raft request'  (duration: 109.4963ms)"],"step_count":1}
	{"level":"info","ts":"2023-03-15T21:15:44.542Z","caller":"traceutil/trace.go:171","msg":"trace[455019656] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"198.5194ms","start":"2023-03-15T21:15:44.343Z","end":"2023-03-15T21:15:44.542Z","steps":["trace[455019656] 'process raft request'  (duration: 87.4089ms)","trace[455019656] 'compare'  (duration: 106.3186ms)"],"step_count":2}
	{"level":"info","ts":"2023-03-15T21:15:44.542Z","caller":"traceutil/trace.go:171","msg":"trace[1743432337] linearizableReadLoop","detail":"{readStateIndex:444; appliedIndex:443; }","duration":"112.8874ms","start":"2023-03-15T21:15:44.430Z","end":"2023-03-15T21:15:44.542Z","steps":["trace[1743432337] 'read index received'  (duration: 852.1µs)","trace[1743432337] 'applied index is now lower than readState.Index'  (duration: 112.0303ms)"],"step_count":2}
	{"level":"warn","ts":"2023-03-15T21:15:44.544Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.3137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-node-lease\" ","response":"range_response_count:1 size:363"}
	{"level":"info","ts":"2023-03-15T21:15:44.545Z","caller":"traceutil/trace.go:171","msg":"trace[83833859] range","detail":"{range_begin:/registry/namespaces/kube-node-lease; range_end:; response_count:1; response_revision:418; }","duration":"115.03ms","start":"2023-03-15T21:15:44.429Z","end":"2023-03-15T21:15:44.545Z","steps":["trace[83833859] 'agreement among raft nodes before linearized reading'  (duration: 113.2035ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-15T21:15:44.545Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.3129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2023-03-15T21:15:44.545Z","caller":"traceutil/trace.go:171","msg":"trace[1382087029] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:418; }","duration":"111.3651ms","start":"2023-03-15T21:15:44.434Z","end":"2023-03-15T21:15:44.545Z","steps":["trace[1382087029] 'agreement among raft nodes before linearized reading'  (duration: 111.2411ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-15T21:15:44.547Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.4412ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"info","ts":"2023-03-15T21:15:44.547Z","caller":"traceutil/trace.go:171","msg":"trace[1257507815] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:418; }","duration":"113.4898ms","start":"2023-03-15T21:15:44.434Z","end":"2023-03-15T21:15:44.547Z","steps":["trace[1257507815] 'agreement among raft nodes before linearized reading'  (duration: 113.3486ms)"],"step_count":1}
	{"level":"info","ts":"2023-03-15T21:15:44.956Z","caller":"traceutil/trace.go:171","msg":"trace[1166219815] linearizableReadLoop","detail":"{readStateIndex:447; appliedIndex:446; }","duration":"121.4317ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.956Z","steps":["trace[1166219815] 'read index received'  (duration: 3.7558ms)","trace[1166219815] 'applied index is now lower than readState.Index'  (duration: 117.6698ms)"],"step_count":2}
	{"level":"info","ts":"2023-03-15T21:15:44.956Z","caller":"traceutil/trace.go:171","msg":"trace[513205189] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"125.7589ms","start":"2023-03-15T21:15:44.830Z","end":"2023-03-15T21:15:44.956Z","steps":["trace[513205189] 'process raft request'  (duration: 94.9828ms)","trace[513205189] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/pods/kube-system/kube-proxy-m4md5; req_size:4522; } (duration: 27.8113ms)"],"step_count":2}
	{"level":"warn","ts":"2023-03-15T21:15:44.957Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.7804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2023-03-15T21:15:44.958Z","caller":"traceutil/trace.go:171","msg":"trace[1937091289] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:421; }","duration":"123.6279ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.958Z","steps":["trace[1937091289] 'agreement among raft nodes before linearized reading'  (duration: 121.5636ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-15T21:15:44.965Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.7433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" ","response":"range_response_count:64 size:57899"}
	{"level":"info","ts":"2023-03-15T21:15:44.965Z","caller":"traceutil/trace.go:171","msg":"trace[1243225417] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:64; response_revision:421; }","duration":"129.8213ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.965Z","steps":["trace[1243225417] 'agreement among raft nodes before linearized reading'  (duration: 123.3525ms)"],"step_count":1}
	{"level":"info","ts":"2023-03-15T21:15:46.132Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-03-15T21:15:46.132Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-073300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	WARNING: 2023/03/15 21:15:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2023/03/15 21:15:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-03-15T21:15:46.436Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
	{"level":"info","ts":"2023-03-15T21:15:46.534Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2023-03-15T21:15:46.538Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2023-03-15T21:15:46.538Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-073300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	
	* 
	* ==> etcd [aba41f11fdc8] <==
	* {"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2023-03-15T21:16:07.247Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-15T21:16:07.247Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-15T21:16:07.244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-15T21:16:07.326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-15T21:16:07.326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 3"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 4"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 4"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 4"}
	{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 4"}
	{"level":"info","ts":"2023-03-15T21:16:09.138Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-15T21:16:09.139Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:pause-073300 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-15T21:16:09.139Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-15T21:16:09.143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-15T21:16:09.144Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-15T21:16:09.147Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-15T21:16:09.147Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.103.2:2379"}
	
	* 
	* ==> kernel <==
	*  21:16:49 up  1:24,  0 users,  load average: 15.02, 10.62, 6.77
	Linux pause-073300 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [0cb5567e32ab] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 21:15:54.852783       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 21:15:55.001054       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 21:15:55.018769       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [88f944458735] <==
	* I0315 21:16:13.321430       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0315 21:16:13.321719       1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
	I0315 21:16:13.322696       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 21:16:13.320670       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0315 21:16:13.320231       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0315 21:16:13.324414       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0315 21:16:13.437824       1 shared_informer.go:280] Caches are synced for configmaps
	I0315 21:16:13.525354       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0315 21:16:13.623881       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 21:16:13.624222       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0315 21:16:13.624252       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0315 21:16:13.624258       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0315 21:16:13.624333       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 21:16:13.625322       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0315 21:16:13.625384       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0315 21:16:13.625410       1 cache.go:39] Caches are synced for autoregister controller
	I0315 21:16:13.630897       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 21:16:14.357698       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0315 21:16:16.572417       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0315 21:16:16.602561       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0315 21:16:16.951884       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0315 21:16:17.136478       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0315 21:16:17.246459       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0315 21:16:28.244694       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0315 21:16:28.342519       1 controller.go:615] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [1f51fce69c22] <==
	* I0315 21:15:33.346996       1 serving.go:348] Generated self-signed cert in-memory
	I0315 21:15:39.060876       1 controllermanager.go:182] Version: v1.26.2
	I0315 21:15:39.061047       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 21:15:39.072013       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0315 21:15:39.072120       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 21:15:39.072625       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 21:15:39.072677       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-controller-manager [e6bb3d9a35ff] <==
	* I0315 21:16:28.124587       1 shared_informer.go:280] Caches are synced for cidrallocator
	I0315 21:16:28.124592       1 shared_informer.go:280] Caches are synced for crt configmap
	I0315 21:16:28.124598       1 shared_informer.go:280] Caches are synced for endpoint
	I0315 21:16:28.124661       1 shared_informer.go:280] Caches are synced for HPA
	I0315 21:16:28.124898       1 shared_informer.go:280] Caches are synced for GC
	I0315 21:16:28.124186       1 shared_informer.go:280] Caches are synced for taint
	I0315 21:16:28.125247       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0315 21:16:28.125313       1 taint_manager.go:211] "Sending events to api server"
	I0315 21:16:28.125358       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	W0315 21:16:28.125464       1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-073300. Assuming now as a timestamp.
	I0315 21:16:28.125524       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0315 21:16:28.126198       1 event.go:294] "Event occurred" object="pause-073300" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-073300 event: Registered Node pause-073300 in Controller"
	I0315 21:16:28.126964       1 shared_informer.go:280] Caches are synced for stateful set
	I0315 21:16:28.227084       1 shared_informer.go:280] Caches are synced for namespace
	I0315 21:16:28.227137       1 shared_informer.go:280] Caches are synced for disruption
	I0315 21:16:28.227298       1 shared_informer.go:280] Caches are synced for deployment
	I0315 21:16:28.227547       1 shared_informer.go:280] Caches are synced for ReplicaSet
	I0315 21:16:28.227631       1 shared_informer.go:280] Caches are synced for service account
	I0315 21:16:28.229520       1 shared_informer.go:273] Waiting for caches to sync for garbage collector
	I0315 21:16:28.233560       1 shared_informer.go:280] Caches are synced for resource quota
	I0315 21:16:28.236781       1 shared_informer.go:280] Caches are synced for resource quota
	I0315 21:16:28.529112       1 event.go:294] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0315 21:16:28.534472       1 shared_informer.go:280] Caches are synced for garbage collector
	I0315 21:16:28.562852       1 shared_informer.go:280] Caches are synced for garbage collector
	I0315 21:16:28.562973       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [b7b4669a56d5] <==
	* I0315 21:16:25.942265       1 node.go:163] Successfully retrieved node IP: 192.168.103.2
	I0315 21:16:25.944192       1 server_others.go:109] "Detected node IP" address="192.168.103.2"
	I0315 21:16:25.944360       1 server_others.go:535] "Using iptables proxy"
	I0315 21:16:26.134212       1 server_others.go:176] "Using iptables Proxier"
	I0315 21:16:26.134360       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0315 21:16:26.134376       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0315 21:16:26.134395       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0315 21:16:26.134427       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 21:16:26.135408       1 server.go:655] "Version info" version="v1.26.2"
	I0315 21:16:26.135540       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 21:16:26.136322       1 config.go:317] "Starting service config controller"
	I0315 21:16:26.136477       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0315 21:16:26.136504       1 config.go:226] "Starting endpoint slice config controller"
	I0315 21:16:26.136526       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0315 21:16:26.136357       1 config.go:444] "Starting node config controller"
	I0315 21:16:26.137498       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0315 21:16:26.236790       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0315 21:16:26.238214       1 shared_informer.go:280] Caches are synced for node config
	I0315 21:16:26.238275       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-proxy [c2ad60cad36d] <==
	* E0315 21:15:29.627155       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": dial tcp 192.168.103.2:8443: connect: connection refused
	E0315 21:15:30.826046       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": dial tcp 192.168.103.2:8443: connect: connection refused
	E0315 21:15:43.235847       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": net/http: TLS handshake timeout
	
	* 
	* ==> kube-scheduler [571d48566917] <==
	* I0315 21:16:07.677853       1 serving.go:348] Generated self-signed cert in-memory
	I0315 21:16:13.656832       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.2"
	I0315 21:16:13.656978       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 21:16:13.756221       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0315 21:16:13.756343       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0315 21:16:13.758353       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0315 21:16:13.758370       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 21:16:13.759778       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 21:16:13.759904       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 21:16:13.757625       1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0315 21:16:13.758377       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0315 21:16:13.924166       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0315 21:16:13.924382       1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
	I0315 21:16:13.924585       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [95e8431f8447] <==
	* I0315 21:15:34.052612       1 serving.go:348] Generated self-signed cert in-memory
	W0315 21:15:44.136305       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 21:15:44.140386       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 21:15:44.225673       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 21:15:44.225720       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 21:15:44.445561       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.2"
	I0315 21:15:44.445741       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 21:15:44.453477       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0315 21:15:44.455841       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 21:15:44.456010       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 21:15:44.456059       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 21:15:44.925804       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 21:15:46.348879       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0315 21:15:46.350010       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 21:15:46.352703       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I0315 21:15:46.355076       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0315 21:15:46.355314       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2023-03-15 21:13:03 UTC, end at Wed 2023-03-15 21:16:49 UTC. --
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.766354    7548 kubelet_node_status.go:73] "Successfully registered node" node="pause-073300"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.826254    7548 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.828960    7548 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.830844    7548 apiserver.go:52] "Watching apiserver"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.846713    7548 topology_manager.go:210] "Topology Admit Handler"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.846988    7548 topology_manager.go:210] "Topology Admit Handler"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.925554    7548 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944245    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/428ae579-2b68-4526-a2b0-d8bb5922870f-kube-proxy\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944547    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/428ae579-2b68-4526-a2b0-d8bb5922870f-xtables-lock\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944610    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/428ae579-2b68-4526-a2b0-d8bb5922870f-lib-modules\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944669    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vbb\" (UniqueName: \"kubernetes.io/projected/428ae579-2b68-4526-a2b0-d8bb5922870f-kube-api-access-b7vbb\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945094    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc-config-volume\") pod \"coredns-787d4945fb-2q246\" (UID: \"13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc\") " pod="kube-system/coredns-787d4945fb-2q246"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945520    7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbnj9\" (UniqueName: \"kubernetes.io/projected/13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc-kube-api-access-mbnj9\") pod \"coredns-787d4945fb-2q246\" (UID: \"13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc\") " pod="kube-system/coredns-787d4945fb-2q246"
	Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945563    7548 reconciler.go:41] "Reconciler: start to sync state"
	Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.149192    7548 scope.go:115] "RemoveContainer" containerID="c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62"
	Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.150324    7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154149    7548 kuberuntime_manager.go:872] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.9.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mbnj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-787d4945fb-2q246_kube-system(13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154342    7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-787d4945fb-2q246" podUID=13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154347    7548 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.2,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-b7vbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-m4md5_kube-system(428ae579-2b68-4526-a2b0-d8bb5922870f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.155707    7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-m4md5" podUID=428ae579-2b68-4526-a2b0-d8bb5922870f
	Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.763783    7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.768377    7548 kuberuntime_manager.go:872] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.9.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mbnj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-787d4945fb-2q246_kube-system(13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.768684    7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-787d4945fb-2q246" podUID=13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc
	Mar 15 21:16:25 pause-073300 kubelet[7548]: I0315 21:16:25.248648    7548 scope.go:115] "RemoveContainer" containerID="c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62"
	Mar 15 21:16:27 pause-073300 kubelet[7548]: I0315 21:16:27.246680    7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-073300 -n pause-073300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-073300 -n pause-073300: (2.5376001s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-073300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (134.75s)


Test pass (279/305)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.61
4 TestDownloadOnly/v1.16.0/preload-exists 0.07
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.67
10 TestDownloadOnly/v1.26.2/json-events 8.98
11 TestDownloadOnly/v1.26.2/preload-exists 0
14 TestDownloadOnly/v1.26.2/kubectl 0
15 TestDownloadOnly/v1.26.2/LogsDuration 0.71
16 TestDownloadOnly/DeleteAll 2.92
17 TestDownloadOnly/DeleteAlwaysSucceeds 1.95
18 TestDownloadOnlyKic 5.77
19 TestBinaryMirror 4.81
20 TestOffline 216.62
22 TestAddons/Setup 509.99
26 TestAddons/parallel/MetricsServer 8.81
27 TestAddons/parallel/HelmTiller 44.51
29 TestAddons/parallel/CSI 115.29
30 TestAddons/parallel/Headlamp 33.58
31 TestAddons/parallel/CloudSpanner 8.11
34 TestAddons/serial/GCPAuth/Namespaces 0.79
35 TestAddons/StoppedEnableDisable 14.85
36 TestCertOptions 116.77
37 TestCertExpiration 351.51
38 TestDockerFlags 115.62
39 TestForceSystemdFlag 195.6
40 TestForceSystemdEnv 124.31
45 TestErrorSpam/setup 87.36
46 TestErrorSpam/start 7.23
47 TestErrorSpam/status 6.2
48 TestErrorSpam/pause 5.19
49 TestErrorSpam/unpause 6.12
50 TestErrorSpam/stop 17.49
53 TestFunctional/serial/CopySyncFile 0.03
54 TestFunctional/serial/StartWithProxy 106.83
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 54.26
57 TestFunctional/serial/KubeContext 0.21
58 TestFunctional/serial/KubectlGetPods 0.36
61 TestFunctional/serial/CacheCmd/cache/add_remote 8.78
62 TestFunctional/serial/CacheCmd/cache/add_local 4.32
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.31
64 TestFunctional/serial/CacheCmd/cache/list 0.31
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.54
66 TestFunctional/serial/CacheCmd/cache/cache_reload 6.99
67 TestFunctional/serial/CacheCmd/cache/delete 0.61
68 TestFunctional/serial/MinikubeKubectlCmd 0.59
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.56
70 TestFunctional/serial/ExtraConfig 73.37
71 TestFunctional/serial/ComponentHealth 0.29
72 TestFunctional/serial/LogsCmd 3.3
73 TestFunctional/serial/LogsFileCmd 3.91
75 TestFunctional/parallel/ConfigCmd 2.06
77 TestFunctional/parallel/DryRun 5.12
78 TestFunctional/parallel/InternationalLanguage 2.06
79 TestFunctional/parallel/StatusCmd 6.62
84 TestFunctional/parallel/AddonsCmd 1.11
85 TestFunctional/parallel/PersistentVolumeClaim 153.65
87 TestFunctional/parallel/SSHCmd 3.6
88 TestFunctional/parallel/CpCmd 6.77
89 TestFunctional/parallel/MySQL 103.75
90 TestFunctional/parallel/FileSync 1.67
91 TestFunctional/parallel/CertSync 11.57
95 TestFunctional/parallel/NodeLabels 0.27
97 TestFunctional/parallel/NonActiveRuntimeDisabled 1.59
99 TestFunctional/parallel/License 1.78
100 TestFunctional/parallel/Version/short 0.43
101 TestFunctional/parallel/Version/components 3.07
102 TestFunctional/parallel/ImageCommands/ImageListShort 1.24
103 TestFunctional/parallel/ImageCommands/ImageListTable 1.51
104 TestFunctional/parallel/ImageCommands/ImageListJson 1.71
105 TestFunctional/parallel/ImageCommands/ImageListYaml 1.32
106 TestFunctional/parallel/ImageCommands/ImageBuild 14.48
107 TestFunctional/parallel/ImageCommands/Setup 2.86
108 TestFunctional/parallel/ServiceCmd/DeployApp 32.82
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 17.98
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 38.6
114 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.61
115 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 15.84
116 TestFunctional/parallel/ServiceCmd/List 2.43
117 TestFunctional/parallel/ServiceCmd/JSONOutput 1.84
118 TestFunctional/parallel/ServiceCmd/HTTPS 15.03
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.35
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
125 TestFunctional/parallel/ImageCommands/ImageSaveToFile 6.26
126 TestFunctional/parallel/ImageCommands/ImageRemove 3.5
127 TestFunctional/parallel/ServiceCmd/Format 15.04
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 12.59
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 11.46
130 TestFunctional/parallel/ServiceCmd/URL 15.03
131 TestFunctional/parallel/DockerEnv/powershell 7.43
132 TestFunctional/parallel/UpdateContextCmd/no_changes 1.06
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.97
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 1.02
135 TestFunctional/parallel/ProfileCmd/profile_not_create 3.27
136 TestFunctional/parallel/ProfileCmd/profile_list 2.4
137 TestFunctional/parallel/ProfileCmd/profile_json_output 2.32
138 TestFunctional/delete_addon-resizer_images 1.14
139 TestFunctional/delete_my-image_image 0.23
140 TestFunctional/delete_minikube_cached_images 0.25
144 TestImageBuild/serial/NormalBuild 4.77
145 TestImageBuild/serial/BuildWithBuildArg 6.92
146 TestImageBuild/serial/BuildWithDockerIgnore 1.89
147 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.51
150 TestIngressAddonLegacy/StartLegacyK8sCluster 108.52
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 62.77
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 2.06
157 TestJSONOutput/start/Command 106.03
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 2.15
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 1.99
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 13.93
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 1.81
182 TestKicCustomNetwork/create_custom_network 94.63
183 TestKicCustomNetwork/use_default_bridge_network 94.08
184 TestKicExistingNetwork 95.24
185 TestKicCustomSubnet 95.62
186 TestKicStaticIP 96.28
187 TestMainNoArgs 0.28
188 TestMinikubeProfile 207.35
191 TestMountStart/serial/StartWithMountFirst 25.28
192 TestMountStart/serial/VerifyMountFirst 1.46
193 TestMountStart/serial/StartWithMountSecond 24.08
194 TestMountStart/serial/VerifyMountSecond 1.47
195 TestMountStart/serial/DeleteFirst 4.93
196 TestMountStart/serial/VerifyMountPostDelete 1.44
197 TestMountStart/serial/Stop 3.02
198 TestMountStart/serial/RestartStopped 16.63
199 TestMountStart/serial/VerifyMountPostStop 1.49
202 TestMultiNode/serial/FreshStart2Nodes 186.5
203 TestMultiNode/serial/DeployApp2Nodes 28.42
204 TestMultiNode/serial/PingHostFrom2Pods 3.69
205 TestMultiNode/serial/AddNode 66.89
206 TestMultiNode/serial/ProfileList 1.78
207 TestMultiNode/serial/CopyFile 52.64
208 TestMultiNode/serial/StopNode 8.56
209 TestMultiNode/serial/StartAfterStop 30.09
210 TestMultiNode/serial/RestartKeepsNodes 142.15
211 TestMultiNode/serial/DeleteNode 16.58
212 TestMultiNode/serial/StopMultiNode 26.79
213 TestMultiNode/serial/RestartMultiNode 89.35
214 TestMultiNode/serial/ValidateNameConflict 98.26
218 TestPreload 234.11
219 TestScheduledStopWindows 162.4
223 TestInsufficientStorage 58.7
224 TestRunningBinaryUpgrade 281.49
226 TestKubernetesUpgrade 336.6
227 TestMissingContainerUpgrade 317.19
228 TestStoppedBinaryUpgrade/Setup 0.58
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.48
231 TestNoKubernetes/serial/StartWithK8s 180.52
232 TestStoppedBinaryUpgrade/Upgrade 309.21
233 TestNoKubernetes/serial/StartWithStopK8s 49.11
234 TestNoKubernetes/serial/Start 33.87
235 TestNoKubernetes/serial/VerifyK8sNotRunning 1.6
236 TestNoKubernetes/serial/ProfileList 10.1
237 TestNoKubernetes/serial/Stop 3.48
238 TestNoKubernetes/serial/StartNoArgs 17.89
239 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.62
247 TestStoppedBinaryUpgrade/MinikubeLogs 6.93
249 TestPause/serial/Start 139.44
262 TestStartStop/group/old-k8s-version/serial/FirstStart 186.25
264 TestStartStop/group/no-preload/serial/FirstStart 184.27
267 TestStartStop/group/embed-certs/serial/FirstStart 140.6
268 TestStartStop/group/old-k8s-version/serial/DeployApp 14.96
269 TestStartStop/group/no-preload/serial/DeployApp 17.95
270 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.37
272 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 135.04
273 TestStartStop/group/old-k8s-version/serial/Stop 15.69
274 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 4.11
275 TestStartStop/group/no-preload/serial/Stop 15.77
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.36
277 TestStartStop/group/old-k8s-version/serial/SecondStart 444.55
278 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 2.17
279 TestStartStop/group/no-preload/serial/SecondStart 431.63
280 TestStartStop/group/embed-certs/serial/DeployApp 16.86
281 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.79
282 TestStartStop/group/embed-certs/serial/Stop 17.62
283 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.91
284 TestStartStop/group/embed-certs/serial/SecondStart 370.54
285 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.18
286 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.68
287 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.96
288 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.25
289 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 710.43
290 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 61.1
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 54.08
292 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.05
293 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.68
294 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 1.98
295 TestStartStop/group/old-k8s-version/serial/Pause 14.63
297 TestStartStop/group/newest-cni/serial/FirstStart 140.68
298 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.75
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.69
300 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 2.1
301 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 2.1
302 TestStartStop/group/embed-certs/serial/Pause 18.86
303 TestStartStop/group/no-preload/serial/Pause 16.37
304 TestNetworkPlugins/group/auto/Start 116.77
305 TestNetworkPlugins/group/calico/Start 268.14
306 TestStartStop/group/newest-cni/serial/DeployApp 0
307 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.31
308 TestStartStop/group/newest-cni/serial/Stop 14.46
309 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.23
310 TestStartStop/group/newest-cni/serial/SecondStart 64.77
311 TestNetworkPlugins/group/auto/KubeletFlags 1.77
312 TestNetworkPlugins/group/auto/NetCatPod 29.96
313 TestNetworkPlugins/group/auto/DNS 0.59
314 TestNetworkPlugins/group/auto/Localhost 0.55
315 TestNetworkPlugins/group/auto/HairPin 0.59
316 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 2.35
319 TestStartStop/group/newest-cni/serial/Pause 16.92
320 TestNetworkPlugins/group/custom-flannel/Start 140.22
321 TestNetworkPlugins/group/false/Start 117.05
322 TestNetworkPlugins/group/calico/ControllerPod 5.06
323 TestNetworkPlugins/group/calico/KubeletFlags 1.61
324 TestNetworkPlugins/group/calico/NetCatPod 28.01
325 TestNetworkPlugins/group/calico/DNS 0.72
326 TestNetworkPlugins/group/calico/Localhost 0.65
327 TestNetworkPlugins/group/calico/HairPin 0.58
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.06
329 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.57
330 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.85
331 TestStartStop/group/default-k8s-diff-port/serial/Pause 12.69
332 TestNetworkPlugins/group/custom-flannel/KubeletFlags 1.6
333 TestNetworkPlugins/group/custom-flannel/NetCatPod 58.88
334 TestNetworkPlugins/group/kindnet/Start 159.77
335 TestNetworkPlugins/group/false/KubeletFlags 2
336 TestNetworkPlugins/group/false/NetCatPod 46.11
337 TestNetworkPlugins/group/custom-flannel/DNS 0.65
338 TestNetworkPlugins/group/false/DNS 0.7
339 TestNetworkPlugins/group/custom-flannel/Localhost 0.79
340 TestNetworkPlugins/group/false/Localhost 0.7
341 TestNetworkPlugins/group/flannel/Start 177
342 TestNetworkPlugins/group/custom-flannel/HairPin 0.62
343 TestNetworkPlugins/group/false/HairPin 0.73
344 TestNetworkPlugins/group/enable-default-cni/Start 130.71
345 TestNetworkPlugins/group/bridge/Start 112.82
346 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
347 TestNetworkPlugins/group/kindnet/KubeletFlags 1.53
348 TestNetworkPlugins/group/kindnet/NetCatPod 48.96
349 TestNetworkPlugins/group/kindnet/DNS 0.64
350 TestNetworkPlugins/group/kindnet/Localhost 0.52
351 TestNetworkPlugins/group/kindnet/HairPin 0.59
352 TestNetworkPlugins/group/flannel/ControllerPod 5.05
353 TestNetworkPlugins/group/flannel/KubeletFlags 1.74
354 TestNetworkPlugins/group/flannel/NetCatPod 27.04
355 TestNetworkPlugins/group/flannel/DNS 0.54
356 TestNetworkPlugins/group/flannel/Localhost 0.56
357 TestNetworkPlugins/group/flannel/HairPin 0.61
358 TestNetworkPlugins/group/bridge/KubeletFlags 1.63
359 TestNetworkPlugins/group/bridge/NetCatPod 37.27
360 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 2.13
361 TestNetworkPlugins/group/enable-default-cni/NetCatPod 41.26
362 TestNetworkPlugins/group/bridge/DNS 7.41
363 TestNetworkPlugins/group/bridge/Localhost 0.52
364 TestNetworkPlugins/group/kubenet/Start 119.43
365 TestNetworkPlugins/group/bridge/HairPin 1.08
366 TestNetworkPlugins/group/enable-default-cni/DNS 0.67
367 TestNetworkPlugins/group/enable-default-cni/Localhost 0.53
368 TestNetworkPlugins/group/enable-default-cni/HairPin 0.6
369 TestNetworkPlugins/group/kubenet/KubeletFlags 1.43
370 TestNetworkPlugins/group/kubenet/NetCatPod 26.04
371 TestNetworkPlugins/group/kubenet/DNS 0.55
372 TestNetworkPlugins/group/kubenet/Localhost 0.49
373 TestNetworkPlugins/group/kubenet/HairPin 0.47
TestDownloadOnly/v1.16.0/json-events (10.61s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-385200 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-385200 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (10.6096391s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.61s)

TestDownloadOnly/v1.16.0/preload-exists (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.07s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.67s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-385200
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-385200: exit status 85 (666.0906ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-385200 | minikube1\jenkins | v1.29.0 | 15 Mar 23 19:56 UTC |          |
	|         | -p download-only-385200        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/15 19:56:13
	Running on machine: minikube1
	Binary: Built with gc go1.20.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 19:56:13.860262    9684 out.go:296] Setting OutFile to fd 652 ...
	I0315 19:56:13.931557    9684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 19:56:13.931557    9684 out.go:309] Setting ErrFile to fd 656...
	I0315 19:56:13.931557    9684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0315 19:56:13.942448    9684 root.go:312] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0315 19:56:13.963739    9684 out.go:303] Setting JSON to true
	I0315 19:56:13.967760    9684 start.go:125] hostinfo: {"hostname":"minikube1","uptime":19576,"bootTime":1678890597,"procs":144,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2728 Build 19045.2728","kernelVersion":"10.0.19045.2728 Build 19045.2728","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0315 19:56:13.967760    9684 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0315 19:56:14.290922    9684 out.go:97] [download-only-385200] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	W0315 19:56:14.291336    9684 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0315 19:56:14.291336    9684 notify.go:220] Checking for updates...
	I0315 19:56:14.297816    9684 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 19:56:14.358069    9684 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0315 19:56:14.401080    9684 out.go:169] MINIKUBE_LOCATION=16056
	I0315 19:56:14.458702    9684 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0315 19:56:14.495414    9684 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0315 19:56:14.496453    9684 driver.go:365] Setting default libvirt URI to qemu:///system
	I0315 19:56:14.813149    9684 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0315 19:56:14.823305    9684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 19:56:15.644185    9684 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:47 SystemTime:2023-03-15 19:56:15.000607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plug
ins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 19:56:15.852065    9684 out.go:97] Using the docker driver based on user configuration
	I0315 19:56:15.853173    9684 start.go:296] selected driver: docker
	I0315 19:56:15.853173    9684 start.go:857] validating driver "docker" against <nil>
	I0315 19:56:15.871833    9684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 19:56:16.722746    9684 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:47 SystemTime:2023-03-15 19:56:16.0596498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plu
gins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 19:56:16.723630    9684 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0315 19:56:16.869565    9684 start_flags.go:386] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0315 19:56:16.870300    9684 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 19:56:16.919896    9684 out.go:169] Using Docker Desktop driver with root privileges
	I0315 19:56:16.922657    9684 cni.go:84] Creating CNI manager for ""
	I0315 19:56:16.923214    9684 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0315 19:56:16.923214    9684 start_flags.go:319] config:
	{Name:download-only-385200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-385200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 19:56:16.925996    9684 out.go:97] Starting control plane node download-only-385200 in cluster download-only-385200
	I0315 19:56:16.925996    9684 cache.go:120] Beginning downloading kic base image for docker with docker
	I0315 19:56:16.928102    9684 out.go:97] Pulling base image ...
	I0315 19:56:16.928102    9684 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0315 19:56:16.928102    9684 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon
	I0315 19:56:16.976338    9684 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0315 19:56:16.976408    9684 cache.go:57] Caching tarball of preloaded images
	I0315 19:56:16.976472    9684 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0315 19:56:16.980063    9684 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0315 19:56:16.980063    9684 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0315 19:56:17.040441    9684 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0315 19:56:17.159560    9684 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f to local cache
	I0315 19:56:17.159694    9684 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f.tar -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.37-1678473806-15991@sha256_c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f.tar
	I0315 19:56:17.159694    9684 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f.tar -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.37-1678473806-15991@sha256_c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f.tar
	I0315 19:56:17.159694    9684 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local cache directory
	I0315 19:56:17.161061    9684 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f to local cache
	I0315 19:56:20.978968    9684 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0315 19:56:20.980164    9684 preload.go:256] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0315 19:56:22.133807    9684 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0315 19:56:22.135074    9684 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-385200\config.json ...
	I0315 19:56:22.135548    9684 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-385200\config.json: {Name:mk54be084a68f189f3e9388f282e01fec2069503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 19:56:22.136806    9684 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0315 19:56:22.138241    9684 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-385200"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.67s)

TestDownloadOnly/v1.26.2/json-events (8.98s)

=== RUN   TestDownloadOnly/v1.26.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-385200 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-385200 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=docker --driver=docker: (8.9775194s)
--- PASS: TestDownloadOnly/v1.26.2/json-events (8.98s)

TestDownloadOnly/v1.26.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.2/preload-exists
--- PASS: TestDownloadOnly/v1.26.2/preload-exists (0.00s)

TestDownloadOnly/v1.26.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.2/kubectl
--- PASS: TestDownloadOnly/v1.26.2/kubectl (0.00s)

TestDownloadOnly/v1.26.2/LogsDuration (0.71s)

=== RUN   TestDownloadOnly/v1.26.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-385200
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-385200: exit status 85 (710.7101ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-385200 | minikube1\jenkins | v1.29.0 | 15 Mar 23 19:56 UTC |          |
	|         | -p download-only-385200        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-385200 | minikube1\jenkins | v1.29.0 | 15 Mar 23 19:56 UTC |          |
	|         | -p download-only-385200        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.26.2   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/15 19:56:25
	Running on machine: minikube1
	Binary: Built with gc go1.20.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 19:56:25.206131    8868 out.go:296] Setting OutFile to fd 656 ...
	I0315 19:56:25.272197    8868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 19:56:25.272197    8868 out.go:309] Setting ErrFile to fd 660...
	I0315 19:56:25.272550    8868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0315 19:56:25.283615    8868 root.go:312] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0315 19:56:25.298676    8868 out.go:303] Setting JSON to true
	I0315 19:56:25.302784    8868 start.go:125] hostinfo: {"hostname":"minikube1","uptime":19587,"bootTime":1678890597,"procs":145,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2728 Build 19045.2728","kernelVersion":"10.0.19045.2728 Build 19045.2728","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0315 19:56:25.302896    8868 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0315 19:56:25.307357    8868 out.go:97] [download-only-385200] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	I0315 19:56:25.307357    8868 notify.go:220] Checking for updates...
	I0315 19:56:25.310385    8868 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 19:56:25.314297    8868 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0315 19:56:25.317336    8868 out.go:169] MINIKUBE_LOCATION=16056
	I0315 19:56:25.319829    8868 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0315 19:56:25.325743    8868 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0315 19:56:25.326805    8868 config.go:182] Loaded profile config "download-only-385200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0315 19:56:25.327276    8868 start.go:765] api.Load failed for download-only-385200: filestore "download-only-385200": Docker machine "download-only-385200" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0315 19:56:25.327484    8868 driver.go:365] Setting default libvirt URI to qemu:///system
	W0315 19:56:25.327484    8868 start.go:765] api.Load failed for download-only-385200: filestore "download-only-385200": Docker machine "download-only-385200" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0315 19:56:25.663084    8868 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0315 19:56:25.672530    8868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 19:56:26.497034    8868 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:47 SystemTime:2023-03-15 19:56:25.8409745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plu
gins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 19:56:26.804433    8868 out.go:97] Using the docker driver based on existing profile
	I0315 19:56:26.804433    8868 start.go:296] selected driver: docker
	I0315 19:56:26.804433    8868 start.go:857] validating driver "docker" against &{Name:download-only-385200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-385200 Namespace:default APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP:}
	I0315 19:56:26.823974    8868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 19:56:27.696748    8868 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:47 SystemTime:2023-03-15 19:56:27.0330617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plu
gins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 19:56:27.747129    8868 cni.go:84] Creating CNI manager for ""
	I0315 19:56:27.747129    8868 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0315 19:56:27.747129    8868 start_flags.go:319] config:
	{Name:download-only-385200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:download-only-385200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 19:56:27.750597    8868 out.go:97] Starting control plane node download-only-385200 in cluster download-only-385200
	I0315 19:56:27.750597    8868 cache.go:120] Beginning downloading kic base image for docker with docker
	I0315 19:56:27.753139    8868 out.go:97] Pulling base image ...
	I0315 19:56:27.753139    8868 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0315 19:56:27.753139    8868 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon
	I0315 19:56:27.792402    8868 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.2/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0315 19:56:27.792402    8868 cache.go:57] Caching tarball of preloaded images
	I0315 19:56:27.792402    8868 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0315 19:56:27.796279    8868 out.go:97] Downloading Kubernetes v1.26.2 preload ...
	I0315 19:56:27.796360    8868 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 ...
	I0315 19:56:27.855789    8868 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.2/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4?checksum=md5:f7b26d32aaabacae8612fb9b9e1a4b89 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0315 19:56:27.990901    8868 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f to local cache
	I0315 19:56:27.990901    8868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f.tar -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.37-1678473806-15991@sha256_c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f.tar
	I0315 19:56:27.990901    8868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f.tar -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.37-1678473806-15991@sha256_c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f.tar
	I0315 19:56:27.990901    8868 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local cache directory
	I0315 19:56:27.992184    8868 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local cache directory, skipping pull
	I0315 19:56:27.992272    8868 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f exists in cache, skipping pull
	I0315 19:56:27.992469    8868 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-385200"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.2/LogsDuration (0.71s)

TestDownloadOnly/DeleteAll (2.92s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.9195636s)
--- PASS: TestDownloadOnly/DeleteAll (2.92s)

TestDownloadOnly/DeleteAlwaysSucceeds (1.95s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-385200
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-385200: (1.9513787s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.95s)

TestDownloadOnlyKic (5.77s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-745000 --alsologtostderr --driver=docker
aaa_download_only_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-745000 --alsologtostderr --driver=docker: (2.6607082s)
helpers_test.go:175: Cleaning up "download-docker-745000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-745000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-745000: (1.7304536s)
--- PASS: TestDownloadOnlyKic (5.77s)

TestBinaryMirror (4.81s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-633800 --alsologtostderr --binary-mirror http://127.0.0.1:61078 --driver=docker
aaa_download_only_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-633800 --alsologtostderr --binary-mirror http://127.0.0.1:61078 --driver=docker: (2.7837138s)
helpers_test.go:175: Cleaning up "binary-mirror-633800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-633800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-633800: (1.7766038s)
--- PASS: TestBinaryMirror (4.81s)

TestOffline (216.62s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-050900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-050900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (3m21.330897s)
helpers_test.go:175: Cleaning up "offline-docker-050900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-050900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-050900: (15.2891048s)
--- PASS: TestOffline (216.62s)

TestAddons/Setup (509.99s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-553600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-553600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (8m29.9929731s)
--- PASS: TestAddons/Setup (509.99s)

TestAddons/parallel/MetricsServer (8.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 53.0214ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-g8kwm" [1820d610-a389-4a83-8399-b4cfbfcfd049] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.1012039s
addons_test.go:380: (dbg) Run:  kubectl --context addons-553600 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-553600 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:397: (dbg) Done: out/minikube-windows-amd64.exe -p addons-553600 addons disable metrics-server --alsologtostderr -v=1: (3.3676951s)
--- PASS: TestAddons/parallel/MetricsServer (8.81s)

TestAddons/parallel/HelmTiller (44.51s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 6.9914ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-nml8h" [95d70cba-6be6-46d7-b0ae-291a87ae34ec] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.060237s
addons_test.go:438: (dbg) Run:  kubectl --context addons-553600 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-553600 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (36.8851346s)
addons_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-553600 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:455: (dbg) Done: out/minikube-windows-amd64.exe -p addons-553600 addons disable helm-tiller --alsologtostderr -v=1: (2.5486931s)
--- PASS: TestAddons/parallel/HelmTiller (44.51s)

TestAddons/parallel/CSI (115.29s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 58.1789ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-553600 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:529: (dbg) Done: kubectl --context addons-553600 create -f testdata\csi-hostpath-driver\pvc.yaml: (2.7285605s)
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-553600 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:539: (dbg) Done: kubectl --context addons-553600 create -f testdata\csi-hostpath-driver\pv-pod.yaml: (3.3170513s)
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5e8e2d1f-ed11-4b12-b820-f579a55570d0] Pending
helpers_test.go:344: "task-pv-pod" [5e8e2d1f-ed11-4b12-b820-f579a55570d0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5e8e2d1f-ed11-4b12-b820-f579a55570d0] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 47.1548808s
addons_test.go:549: (dbg) Run:  kubectl --context addons-553600 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-553600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-553600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-553600 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-553600 delete pod task-pv-pod: (4.0156726s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-553600 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-553600 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-553600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-553600 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d4d4b378-b1fc-4855-b187-ca3a3a3c22e5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d4d4b378-b1fc-4855-b187-ca3a3a3c22e5] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.0359655s
addons_test.go:591: (dbg) Run:  kubectl --context addons-553600 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-553600 delete pod task-pv-pod-restore: (2.1888821s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-553600 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-553600 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-553600 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-windows-amd64.exe -p addons-553600 addons disable csi-hostpath-driver --alsologtostderr -v=1: (9.9944726s)
addons_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-553600 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p addons-553600 addons disable volumesnapshots --alsologtostderr -v=1: (3.3219535s)
--- PASS: TestAddons/parallel/CSI (115.29s)

TestAddons/parallel/Headlamp (33.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-553600 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-553600 --alsologtostderr -v=1: (3.5079257s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-65b68b7c6f-t7xtg" [d6487a7e-f011-4187-a78e-4dc8d3c38acd] Pending
helpers_test.go:344: "headlamp-65b68b7c6f-t7xtg" [d6487a7e-f011-4187-a78e-4dc8d3c38acd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-65b68b7c6f-t7xtg" [d6487a7e-f011-4187-a78e-4dc8d3c38acd] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 30.0638409s
--- PASS: TestAddons/parallel/Headlamp (33.58s)

TestAddons/parallel/CloudSpanner (8.11s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-58d646969f-ps4nl" [efad6113-e1aa-4d3b-b7d4-85cf9f41fce9] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.1013138s
addons_test.go:813: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-553600
addons_test.go:813: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-553600: (2.947344s)
--- PASS: TestAddons/parallel/CloudSpanner (8.11s)

TestAddons/serial/GCPAuth/Namespaces (0.79s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-553600 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-553600 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.79s)

TestAddons/StoppedEnableDisable (14.85s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-553600
addons_test.go:147: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-553600: (13.7495007s)
addons_test.go:151: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-553600
addons_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-553600
--- PASS: TestAddons/StoppedEnableDisable (14.85s)

TestCertOptions (116.77s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-298900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-298900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m39.8275016s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-298900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-298900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.5360106s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-298900 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-298900 -- "sudo cat /etc/kubernetes/admin.conf": (1.8393566s)
helpers_test.go:175: Cleaning up "cert-options-298900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-298900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-298900: (13.3125693s)
--- PASS: TestCertOptions (116.77s)

TestCertExpiration (351.51s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-023900 --memory=2048 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-023900 --memory=2048 --cert-expiration=3m --driver=docker: (1m51.1268524s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-023900 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-023900 --memory=2048 --cert-expiration=8760h --driver=docker: (42.003535s)
helpers_test.go:175: Cleaning up "cert-expiration-023900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-023900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-023900: (18.3596446s)
--- PASS: TestCertExpiration (351.51s)

TestDockerFlags (115.62s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-950300 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
E0315 21:11:13.099154    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-950300 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m39.4352468s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-950300 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-950300 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.4391551s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-950300 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-950300 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.5153311s)
helpers_test.go:175: Cleaning up "docker-flags-950300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-950300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-950300: (13.231158s)
--- PASS: TestDockerFlags (115.62s)

TestForceSystemdFlag (195.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-050900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-050900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (3m0.4004997s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-050900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-050900 ssh "docker info --format {{.CgroupDriver}}": (1.7959514s)
helpers_test.go:175: Cleaning up "force-systemd-flag-050900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-050900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-050900: (13.4043836s)
--- PASS: TestForceSystemdFlag (195.60s)

TestForceSystemdEnv (124.31s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-387800 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-387800 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m51.6468827s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-387800 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-387800 ssh "docker info --format {{.CgroupDriver}}": (1.8449022s)
helpers_test.go:175: Cleaning up "force-systemd-env-387800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-387800
E0315 21:15:21.798857    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-387800: (10.8158708s)
--- PASS: TestForceSystemdEnv (124.31s)

TestErrorSpam/setup (87.36s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-105500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-105500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 --driver=docker: (1m27.3543405s)
error_spam_test.go:91: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.2."
--- PASS: TestErrorSpam/setup (87.36s)

TestErrorSpam/start (7.23s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 start --dry-run: (2.3419977s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 start --dry-run: (2.3844505s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 start --dry-run: (2.4980743s)
--- PASS: TestErrorSpam/start (7.23s)

TestErrorSpam/status (6.2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 status: (2.5865168s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 status: (2.0002468s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 status: (1.6099641s)
--- PASS: TestErrorSpam/status (6.20s)

TestErrorSpam/pause (5.19s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 pause: (2.191055s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 pause: (1.4959726s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 pause: (1.4948394s)
--- PASS: TestErrorSpam/pause (5.19s)

TestErrorSpam/unpause (6.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 unpause: (2.043746s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 unpause: (2.2929635s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 unpause: (1.7815911s)
--- PASS: TestErrorSpam/unpause (6.12s)

TestErrorSpam/stop (17.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 stop: (7.9910356s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 stop: (4.6321623s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-105500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-105500 stop: (4.8624365s)
--- PASS: TestErrorSpam/stop (17.49s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\8812\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (106.83s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-919600 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0315 20:10:21.813199    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:21.827608    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:21.843410    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:21.874975    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:21.922276    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:22.016771    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:22.188332    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:22.519496    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:23.167414    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:24.462955    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:27.033735    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:32.157732    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:10:42.398287    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:11:02.888527    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:11:43.853245    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
functional_test.go:2229: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-919600 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m46.8245582s)
--- PASS: TestFunctional/serial/StartWithProxy (106.83s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (54.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-919600 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-919600 --alsologtostderr -v=8: (54.2560937s)
functional_test.go:658: soft start took 54.2574267s for "functional-919600" cluster.
--- PASS: TestFunctional/serial/SoftStart (54.26s)

TestFunctional/serial/KubeContext (0.21s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.21s)

TestFunctional/serial/KubectlGetPods (0.36s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-919600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.36s)

TestFunctional/serial/CacheCmd/cache/add_remote (8.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 cache add k8s.gcr.io/pause:3.1: (2.9159673s)
functional_test.go:1044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 cache add k8s.gcr.io/pause:3.3: (2.6926556s)
functional_test.go:1044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 cache add k8s.gcr.io/pause:latest
E0315 20:13:05.779103    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
functional_test.go:1044: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 cache add k8s.gcr.io/pause:latest: (3.1670269s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.78s)

TestFunctional/serial/CacheCmd/cache/add_local (4.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-919600 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1624599979\001
functional_test.go:1072: (dbg) Done: docker build -t minikube-local-cache-test:functional-919600 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1624599979\001: (1.4139499s)
functional_test.go:1084: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 cache add minikube-local-cache-test:functional-919600
functional_test.go:1084: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 cache add minikube-local-cache-test:functional-919600: (2.3042125s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 cache delete minikube-local-cache-test:functional-919600
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-919600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.32s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.31s)

TestFunctional/serial/CacheCmd/cache/list (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.31s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh sudo crictl images
functional_test.go:1119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh sudo crictl images: (1.541266s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.54s)

TestFunctional/serial/CacheCmd/cache/cache_reload (6.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1142: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh sudo docker rmi k8s.gcr.io/pause:latest: (1.549361s)
functional_test.go:1148: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-919600 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (1.4897588s)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 cache reload: (2.4948606s)
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (1.4569452s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (6.99s)

TestFunctional/serial/CacheCmd/cache/delete (0.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.61s)

TestFunctional/serial/MinikubeKubectlCmd (0.59s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 kubectl -- --context functional-919600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.59s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (2.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out\kubectl.exe --context functional-919600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.56s)

TestFunctional/serial/ExtraConfig (73.37s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-919600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-919600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m13.368081s)
functional_test.go:756: restart took 1m13.3684354s for "functional-919600" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (73.37s)

TestFunctional/serial/ComponentHealth (0.29s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-919600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.29s)

TestFunctional/serial/LogsCmd (3.3s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 logs
functional_test.go:1231: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 logs: (3.296297s)
--- PASS: TestFunctional/serial/LogsCmd (3.30s)

TestFunctional/serial/LogsFileCmd (3.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3349755003\001\logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3349755003\001\logs.txt: (3.9044576s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.91s)

TestFunctional/parallel/ConfigCmd (2.06s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-919600 config get cpus: exit status 14 (299.6336ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-919600 config get cpus: exit status 14 (331.0056ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.06s)

TestFunctional/parallel/DryRun (5.12s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-919600 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:969: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-919600 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (2.076744s)

-- stdout --
	* [functional-919600] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16056
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0315 20:16:13.215888    7712 out.go:296] Setting OutFile to fd 896 ...
	I0315 20:16:13.313433    7712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 20:16:13.313433    7712 out.go:309] Setting ErrFile to fd 968...
	I0315 20:16:13.313433    7712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 20:16:13.336445    7712 out.go:303] Setting JSON to false
	I0315 20:16:13.343463    7712 start.go:125] hostinfo: {"hostname":"minikube1","uptime":20775,"bootTime":1678890597,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2728 Build 19045.2728","kernelVersion":"10.0.19045.2728 Build 19045.2728","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0315 20:16:13.343463    7712 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0315 20:16:13.350455    7712 out.go:177] * [functional-919600] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	I0315 20:16:13.354436    7712 notify.go:220] Checking for updates...
	I0315 20:16:13.356438    7712 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 20:16:13.360088    7712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 20:16:13.362455    7712 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0315 20:16:13.364456    7712 out.go:177]   - MINIKUBE_LOCATION=16056
	I0315 20:16:13.367431    7712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 20:16:13.370454    7712 config.go:182] Loaded profile config "functional-919600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 20:16:13.372433    7712 driver.go:365] Setting default libvirt URI to qemu:///system
	I0315 20:16:13.797253    7712 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0315 20:16:13.814386    7712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 20:16:14.920243    7712 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.1058596s)
	I0315 20:16:14.920243    7712 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:55 SystemTime:2023-03-15 20:16:14.0327003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plu
gins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 20:16:14.925252    7712 out.go:177] * Using the docker driver based on existing profile
	I0315 20:16:14.927256    7712 start.go:296] selected driver: docker
	I0315 20:16:14.927256    7712 start.go:857] validating driver "docker" against &{Name:functional-919600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:functional-919600 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 20:16:14.927256    7712 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 20:16:15.018275    7712 out.go:177] 
	W0315 20:16:15.020260    7712 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0315 20:16:15.023283    7712 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-919600 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:986: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-919600 --dry-run --alsologtostderr -v=1 --driver=docker: (3.0464651s)
--- PASS: TestFunctional/parallel/DryRun (5.12s)

TestFunctional/parallel/InternationalLanguage (2.06s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-919600 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-919600 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (2.0557077s)

-- stdout --
	* [functional-919600] minikube v1.29.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16056
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0315 20:16:18.355517    3876 out.go:296] Setting OutFile to fd 1008 ...
	I0315 20:16:18.465506    3876 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 20:16:18.465506    3876 out.go:309] Setting ErrFile to fd 900...
	I0315 20:16:18.465506    3876 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 20:16:18.489513    3876 out.go:303] Setting JSON to false
	I0315 20:16:18.494514    3876 start.go:125] hostinfo: {"hostname":"minikube1","uptime":20781,"bootTime":1678890597,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2728 Build 19045.2728","kernelVersion":"10.0.19045.2728 Build 19045.2728","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0315 20:16:18.494514    3876 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0315 20:16:18.498504    3876 out.go:177] * [functional-919600] minikube v1.29.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	I0315 20:16:18.501520    3876 notify.go:220] Checking for updates...
	I0315 20:16:18.504522    3876 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0315 20:16:18.506501    3876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 20:16:18.509521    3876 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0315 20:16:18.511535    3876 out.go:177]   - MINIKUBE_LOCATION=16056
	I0315 20:16:18.515511    3876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 20:16:18.518505    3876 config.go:182] Loaded profile config "functional-919600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 20:16:18.520523    3876 driver.go:365] Setting default libvirt URI to qemu:///system
	I0315 20:16:18.940513    3876 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0315 20:16:18.954511    3876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0315 20:16:20.039422    3876 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.0849139s)
	I0315 20:16:20.040412    3876 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:55 SystemTime:2023-03-15 20:16:19.1928563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0315 20:16:20.049432    3876 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0315 20:16:20.052410    3876 start.go:296] selected driver: docker
	I0315 20:16:20.052410    3876 start.go:857] validating driver "docker" against &{Name:functional-919600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:functional-919600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0315 20:16:20.052410    3876 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 20:16:20.116437    3876 out.go:177] 
	W0315 20:16:20.118422    3876 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0315 20:16:20.123423    3876 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (2.06s)

TestFunctional/parallel/StatusCmd (6.62s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 status
functional_test.go:849: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 status: (1.9410437s)
functional_test.go:855: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:855: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (2.0966122s)
functional_test.go:867: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 status -o json
functional_test.go:867: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 status -o json: (2.583171s)
--- PASS: TestFunctional/parallel/StatusCmd (6.62s)

TestFunctional/parallel/AddonsCmd (1.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (1.11s)

TestFunctional/parallel/PersistentVolumeClaim (153.65s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9dcba916-9d23-4882-b2d3-ae81f1b63ae2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0429281s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-919600 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-919600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:69: (dbg) Done: kubectl --context functional-919600 apply -f testdata/storage-provisioner/pvc.yaml: (1.0039241s)
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-919600 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-919600 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-919600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Done: kubectl --context functional-919600 apply -f testdata/storage-provisioner/pod.yaml: (1.097513s)
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [36fa20b8-2d14-45f3-bd43-6e4f8e689c8a] Pending
helpers_test.go:344: "sp-pod" [36fa20b8-2d14-45f3-bd43-6e4f8e689c8a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [36fa20b8-2d14-45f3-bd43-6e4f8e689c8a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m3.1592567s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-919600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-919600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-919600 delete -f testdata/storage-provisioner/pod.yaml: (3.1851544s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-919600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2126303a-a554-4f67-a81b-1ed5a2da72fb] Pending
helpers_test.go:344: "sp-pod" [2126303a-a554-4f67-a81b-1ed5a2da72fb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2126303a-a554-4f67-a81b-1ed5a2da72fb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m14.0860645s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-919600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (153.65s)

TestFunctional/parallel/SSHCmd (3.6s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh "echo hello"
functional_test.go:1723: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh "echo hello": (1.6761518s)
functional_test.go:1740: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh "cat /etc/hostname"
functional_test.go:1740: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh "cat /etc/hostname": (1.9268727s)
--- PASS: TestFunctional/parallel/SSHCmd (3.60s)

TestFunctional/parallel/CpCmd (6.77s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 cp testdata\cp-test.txt /home/docker/cp-test.txt: (1.3661198s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh -n functional-919600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh -n functional-919600 "sudo cat /home/docker/cp-test.txt": (1.6744133s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 cp functional-919600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd1146629129\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 cp functional-919600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd1146629129\001\cp-test.txt: (1.6923033s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh -n functional-919600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh -n functional-919600 "sudo cat /home/docker/cp-test.txt": (2.0322404s)
--- PASS: TestFunctional/parallel/CpCmd (6.77s)

TestFunctional/parallel/MySQL (103.75s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-919600 replace --force -f testdata\mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-7zsss" [96269edf-0af7-42d1-9220-e34e0a3c94ff] Pending
helpers_test.go:344: "mysql-888f84dd9-7zsss" [96269edf-0af7-42d1-9220-e34e0a3c94ff] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-7zsss" [96269edf-0af7-42d1-9220-e34e0a3c94ff] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m20.1681173s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;": exit status 1 (607.2842ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;": exit status 1 (511.323ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;": exit status 1 (531.4089ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;": exit status 1 (591.3326ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;": exit status 1 (593.4715ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;": exit status 1 (576.2513ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-919600 exec mysql-888f84dd9-7zsss -- mysql -ppassword -e "show databases;"
E0315 20:20:21.809487    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (103.75s)

TestFunctional/parallel/FileSync (1.67s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/8812/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /etc/test/nested/copy/8812/hosts"
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /etc/test/nested/copy/8812/hosts": (1.6671133s)
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.67s)

TestFunctional/parallel/CertSync (11.57s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/8812.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /etc/ssl/certs/8812.pem"
functional_test.go:1968: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /etc/ssl/certs/8812.pem": (1.9485912s)
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/8812.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /usr/share/ca-certificates/8812.pem"
functional_test.go:1968: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /usr/share/ca-certificates/8812.pem": (1.9141315s)
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /etc/ssl/certs/51391683.0"
E0315 20:15:49.632683    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
functional_test.go:1968: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.9942479s)
functional_test.go:1994: Checking for existence of /etc/ssl/certs/88122.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /etc/ssl/certs/88122.pem"
functional_test.go:1995: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /etc/ssl/certs/88122.pem": (1.7838483s)
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/88122.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /usr/share/ca-certificates/88122.pem"
functional_test.go:1995: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /usr/share/ca-certificates/88122.pem": (1.8896021s)
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1995: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (2.0329374s)
--- PASS: TestFunctional/parallel/CertSync (11.57s)

TestFunctional/parallel/NodeLabels (0.27s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-919600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.27s)

TestFunctional/parallel/NonActiveRuntimeDisabled (1.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-919600 ssh "sudo systemctl is-active crio": exit status 1 (1.5860472s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.59s)

TestFunctional/parallel/License (1.78s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2283: (dbg) Done: out/minikube-windows-amd64.exe license: (1.7620513s)
--- PASS: TestFunctional/parallel/License (1.78s)

TestFunctional/parallel/Version/short (0.43s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.43s)

TestFunctional/parallel/Version/components (3.07s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 version -o=json --components
functional_test.go:2265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 version -o=json --components: (3.0729336s)
--- PASS: TestFunctional/parallel/Version/components (3.07s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image ls --format short
functional_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image ls --format short: (1.2428297s)
functional_test.go:264: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-919600 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-919600
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-919600
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image ls --format table
functional_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image ls --format table: (1.5074954s)
functional_test.go:264: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-919600 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-919600 | b084ec955c12a | 1.24MB |
| docker.io/library/nginx                     | latest            | 904b8cb13b932 | 142MB  |
| registry.k8s.io/kube-proxy                  | v1.26.2           | 6f64e7135a6ec | 65.6MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.26.2           | 63d3239c3c159 | 134MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-919600 | fe618d14ce274 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.26.2           | 240e201d5b0d8 | 123MB  |
| registry.k8s.io/kube-scheduler              | v1.26.2           | db8f409d9a5d7 | 56.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| gcr.io/google-containers/addon-resizer      | functional-919600 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (1.51s)
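For scripting against this output, the pipe-delimited table that `image ls --format table` prints can be parsed with a few lines of Python. A minimal sketch — `parse_table` is a hypothetical helper, and the sample rows are copied from the listing above:

```python
# Parse the pipe-delimited table emitted by `minikube image ls --format table`.
sample = """\
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
|---------------------------------------------|-------------------|---------------|--------|
"""

def parse_table(text):
    rows = []
    for line in text.splitlines():
        # Skip blank lines and the |----| border rows.
        if not line.startswith("|") or set(line) <= {"|", "-"}:
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(cells)
    header, *data = rows
    return [dict(zip(header, row)) for row in data]

images = parse_table(sample)
print(images[0]["Image"], images[0]["Tag"])  # registry.k8s.io/pause 3.6
```

Note the sizes in this format are human-readable strings ("683kB", "299MB"), so they need separate conversion before arithmetic.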

TestFunctional/parallel/ImageCommands/ImageListJson (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image ls --format json
functional_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image ls --format json: (1.7065481s)
functional_test.go:264: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-919600 image ls --format json:
[{"id":"fe618d14ce274c5cf55648cf9a1c6148f9eabdaa827deba8fb9b24706190df00","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-919600"],"size":"30"},{"id":"63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.2"],"size":"134000000"},{"id":"6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.2"],"size":"65599999"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-919600"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644ac1e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.2"],"size":"123000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"b084ec955c12adffb690e0a4ba79b4c5f586e8e4fcd94e396b605d8ed74119b3","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-919600"],"size":"1240000"},{"id":"db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.2"],"size":"56300000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.71s)
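The JSON format is the easiest to consume programmatically: a flat array of objects with `id`, `repoDigests`, `repoTags`, and `size` (bytes, encoded as a string). A minimal sketch, using two entries copied from the output above:

```python
import json

# Two entries copied verbatim from the `image ls --format json` output above.
sample = json.loads("""
[{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee",
  "repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},
 {"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7",
  "repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"}]
""")

# Sizes are strings, so convert before summing.
total_bytes = sum(int(img["size"]) for img in sample)
tags = [tag for img in sample for tag in img["repoTags"]]
print(total_bytes)  # 299683000
print(tags)
```

In a pipeline this would typically be fed directly from the CLI, e.g. `minikube -p functional-919600 image ls --format json | python parse_images.py` (the script name is illustrative).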

TestFunctional/parallel/ImageCommands/ImageListYaml (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image ls --format yaml
functional_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image ls --format yaml: (1.3160301s)
functional_test.go:264: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-919600 image ls --format yaml:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: fe618d14ce274c5cf55648cf9a1c6148f9eabdaa827deba8fb9b24706190df00
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-919600
size: "30"
- id: db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.2
size: "56300000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-919600
size: "32900000"
- id: 63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.2
size: "134000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644ac1e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.2
size: "123000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.2
size: "65599999"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (14.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-919600 ssh pgrep buildkitd: exit status 1 (1.7894179s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image build -t localhost/my-image:functional-919600 testdata\build
functional_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image build -t localhost/my-image:functional-919600 testdata\build: (10.9076171s)
functional_test.go:318: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-919600 image build -t localhost/my-image:functional-919600 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in e1885b2df92e
Removing intermediate container e1885b2df92e
---> 5c46751c0966
Step 3/3 : ADD content.txt /
---> b084ec955c12
Successfully built b084ec955c12
Successfully tagged localhost/my-image:functional-919600
functional_test.go:321: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-919600 image build -t localhost/my-image:functional-919600 testdata\build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image ls: (1.7852611s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (14.48s)
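For reference, the three "Step N/3" lines in the build log above correspond to a Dockerfile along these lines (reconstructed from the step output; the actual contents of `testdata\build` are not reproduced in this report):

```dockerfile
# Reconstructed from the "Step 1/3".."Step 3/3" lines in the build log.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```

The stderr warning is expected here: the build went through the legacy (non-BuildKit) builder, which Docker 23.x deprecates in favor of buildx.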

TestFunctional/parallel/ImageCommands/Setup (2.86s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.5532463s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-919600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.86s)

TestFunctional/parallel/ServiceCmd/DeployApp (32.82s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-919600 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-919600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-jr64h" [1e615e6e-d4e3-4797-8ecb-7a09cda1f6aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-jr64h" [1e615e6e-d4e3-4797-8ecb-7a09cda1f6aa] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 32.1118713s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (32.82s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (17.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image load --daemon gcr.io/google-containers/addon-resizer:functional-919600
functional_test.go:353: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image load --daemon gcr.io/google-containers/addon-resizer:functional-919600: (15.9255785s)
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image ls: (2.057573s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (17.98s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-919600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (38.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-919600 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:147: (dbg) Done: kubectl --context functional-919600 apply -f testdata\testsvc.yaml: (1.3345708s)
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b1e69b08-b183-488b-b0d8-11b13fd876a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b1e69b08-b183-488b-b0d8-11b13fd876a4] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 37.157336s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (38.60s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image load --daemon gcr.io/google-containers/addon-resizer:functional-919600
functional_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image load --daemon gcr.io/google-containers/addon-resizer:functional-919600: (5.4329224s)
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image ls: (1.172979s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.61s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.5016261s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-919600
functional_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image load --daemon gcr.io/google-containers/addon-resizer:functional-919600
functional_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image load --daemon gcr.io/google-containers/addon-resizer:functional-919600: (11.6265576s)
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image ls: (1.367491s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.84s)

TestFunctional/parallel/ServiceCmd/List (2.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 service list
E0315 20:15:21.816836    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
functional_test.go:1457: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 service list: (2.4314491s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (2.43s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.84s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 service list -o json
functional_test.go:1487: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 service list -o json: (1.8361216s)
functional_test.go:1492: Took "1.8363418s" to run "out/minikube-windows-amd64.exe -p functional-919600 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.84s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 service --namespace=default --https --url hello-node
functional_test.go:1507: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-919600 service --namespace=default --https --url hello-node: exit status 1 (15.0334616s)

-- stdout --
	https://127.0.0.1:62030

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1520: found endpoint: https://127.0.0.1:62030
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-919600 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.35s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-919600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1928: TerminateProcess: Access is denied.
helpers_test.go:508: unable to kill pid 5272: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (6.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image save gcr.io/google-containers/addon-resizer:functional-919600 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
functional_test.go:378: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image save gcr.io/google-containers/addon-resizer:functional-919600 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (6.2618073s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (6.26s)

TestFunctional/parallel/ImageCommands/ImageRemove (3.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image rm gcr.io/google-containers/addon-resizer:functional-919600
functional_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image rm gcr.io/google-containers/addon-resizer:functional-919600: (1.8866477s)
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image ls: (1.616213s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (3.50s)

TestFunctional/parallel/ServiceCmd/Format (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 service hello-node --url --format={{.IP}}
functional_test.go:1538: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-919600 service hello-node --url --format={{.IP}}: exit status 1 (15.0409755s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (12.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
functional_test.go:407: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (10.8893297s)
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image ls: (1.7053561s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (12.59s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-919600
functional_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 image save --daemon gcr.io/google-containers/addon-resizer:functional-919600
functional_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 image save --daemon gcr.io/google-containers/addon-resizer:functional-919600: (10.9304821s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-919600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.46s)

TestFunctional/parallel/ServiceCmd/URL (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 service hello-node --url
functional_test.go:1557: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-919600 service hello-node --url: exit status 1 (15.0327891s)

-- stdout --
	http://127.0.0.1:62063

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1563: found endpoint for hello-node: http://127.0.0.1:62063
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.03s)

TestFunctional/parallel/DockerEnv/powershell (7.43s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:494: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-919600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-919600"
functional_test.go:494: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-919600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-919600": (4.650462s)
functional_test.go:517: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-919600 docker-env | Invoke-Expression ; docker images"
functional_test.go:517: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-919600 docker-env | Invoke-Expression ; docker images": (2.7755199s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (7.43s)

TestFunctional/parallel/UpdateContextCmd/no_changes (1.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 update-context --alsologtostderr -v=2
functional_test.go:2114: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 update-context --alsologtostderr -v=2: (1.0583771s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (1.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.97s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.97s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (1.02s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-919600 update-context --alsologtostderr -v=2
functional_test.go:2114: (dbg) Done: out/minikube-windows-amd64.exe -p functional-919600 update-context --alsologtostderr -v=2: (1.0192312s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (1.02s)

TestFunctional/parallel/ProfileCmd/profile_not_create (3.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1273: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.519983s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (3.27s)

TestFunctional/parallel/ProfileCmd/profile_list (2.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1308: (dbg) Done: out/minikube-windows-amd64.exe profile list: (2.0108033s)
functional_test.go:1313: Took "2.0108033s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1327: Took "384.7193ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (2.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (2.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1359: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.9060049s)
functional_test.go:1364: Took "1.9060049s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1377: Took "410.5436ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (2.32s)

TestFunctional/delete_addon-resizer_images (1.14s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-919600
--- PASS: TestFunctional/delete_addon-resizer_images (1.14s)

TestFunctional/delete_my-image_image (0.23s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-919600
--- PASS: TestFunctional/delete_my-image_image (0.23s)

TestFunctional/delete_minikube_cached_images (0.25s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-919600
--- PASS: TestFunctional/delete_minikube_cached_images (0.25s)

TestImageBuild/serial/NormalBuild (4.77s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-787700
image_test.go:73: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-787700: (4.7659907s)
--- PASS: TestImageBuild/serial/NormalBuild (4.77s)

TestImageBuild/serial/BuildWithBuildArg (6.92s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-787700
image_test.go:94: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-787700: (6.9233322s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (6.92s)

TestImageBuild/serial/BuildWithDockerIgnore (1.89s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-787700
image_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-787700: (1.8925091s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.89s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.51s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-787700
image_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-787700: (1.5057406s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.51s)

TestIngressAddonLegacy/StartLegacyK8sCluster (108.52s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-976400 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0315 20:24:48.858647    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:48.873775    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:48.889698    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:48.921471    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:48.968324    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:49.060488    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:49.232879    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:49.554878    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:50.210816    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:51.501356    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:54.066913    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:24:59.193809    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-976400 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (1m48.5240067s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (108.52s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (62.77s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-976400 addons enable ingress --alsologtostderr -v=5
E0315 20:25:09.442095    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:25:21.813707    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:25:29.936081    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:26:10.905295    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-976400 addons enable ingress --alsologtostderr -v=5: (1m2.7728173s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (62.77s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (2.06s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-976400 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-976400 addons enable ingress-dns --alsologtostderr -v=5: (2.0635618s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (2.06s)

TestJSONOutput/start/Command (106.03s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-372200 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0315 20:27:32.827092    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-372200 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m46.0252394s)
--- PASS: TestJSONOutput/start/Command (106.03s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (2.15s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-372200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-372200 --output=json --user=testUser: (2.153361s)
--- PASS: TestJSONOutput/pause/Command (2.15s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (1.99s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-372200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-372200 --output=json --user=testUser: (1.9908964s)
--- PASS: TestJSONOutput/unpause/Command (1.99s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.93s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-372200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-372200 --output=json --user=testUser: (13.9321197s)
--- PASS: TestJSONOutput/stop/Command (13.93s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.81s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-623000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-623000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (326.387ms)

-- stdout --
	{"specversion":"1.0","id":"2ff28b37-5fdc-48cb-934e-7093744a902e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-623000] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6409c92f-9e68-46e5-b2f0-5cf052dd2945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"4676963b-e72a-45ef-aa42-a092f72b609f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"20cbd944-59a4-497a-9e9d-a7544c8db816","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"d2da07b1-bf25-4c47-bb3e-f79b16aa51b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16056"}}
	{"specversion":"1.0","id":"582db5db-11c2-4632-a5a0-fcd7e80e95e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e1eb6515-3648-4d7c-91c5-8fe9553208f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-623000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-623000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-623000: (1.4853329s)
--- PASS: TestErrorJSONOutput (1.81s)

TestKicCustomNetwork/create_custom_network (94.63s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-925300 --network=
E0315 20:29:48.858419    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:30:16.671770    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:30:21.803756    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-925300 --network=: (1m27.9038835s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-925300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-925300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-925300: (6.5121048s)
--- PASS: TestKicCustomNetwork/create_custom_network (94.63s)

TestKicCustomNetwork/use_default_bridge_network (94.08s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-167100 --network=bridge
E0315 20:31:13.096661    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:13.112402    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:13.127799    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:13.159325    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:13.205955    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:13.287756    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:13.460265    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:13.788789    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:14.436863    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:15.731722    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:18.302725    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:23.435937    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:33.680956    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:31:54.161745    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-167100 --network=bridge: (1m28.2541265s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-167100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-167100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-167100: (5.562758s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (94.08s)

TestKicExistingNetwork (95.24s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-053700 --network=existing-network
E0315 20:32:35.132442    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:33:57.054085    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-053700 --network=existing-network: (1m27.8990325s)
helpers_test.go:175: Cleaning up "existing-network-053700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-053700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-053700: (5.8308484s)
--- PASS: TestKicExistingNetwork (95.24s)

TestKicCustomSubnet (95.62s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-712900 --subnet=192.168.60.0/24
E0315 20:34:48.866155    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:35:21.813256    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-712900 --subnet=192.168.60.0/24: (1m28.915796s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-712900 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-712900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-712900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-712900: (6.4625848s)
--- PASS: TestKicCustomSubnet (95.62s)

TestKicStaticIP (96.28s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-877200 --static-ip=192.168.200.200
E0315 20:36:13.096248    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 20:36:40.904193    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-877200 --static-ip=192.168.200.200: (1m28.7822055s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-877200 ip
helpers_test.go:175: Cleaning up "static-ip-877200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-877200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-877200: (6.6338245s)
--- PASS: TestKicStaticIP (96.28s)

TestMainNoArgs (0.28s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.28s)

TestMinikubeProfile (207.35s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-422200 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-422200 --driver=docker: (1m39.510224s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-422200 --driver=docker
E0315 20:39:48.849730    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:40:21.803538    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-422200 --driver=docker: (1m27.750621s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-422200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.704853s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-422200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (3.1614278s)
helpers_test.go:175: Cleaning up "second-422200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-422200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-422200: (6.7708146s)
helpers_test.go:175: Cleaning up "first-422200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-422200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-422200: (6.2258873s)
--- PASS: TestMinikubeProfile (207.35s)

TestMountStart/serial/StartWithMountFirst (25.28s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-155000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-155000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (24.2675291s)
E0315 20:41:12.037793    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
--- PASS: TestMountStart/serial/StartWithMountFirst (25.28s)

TestMountStart/serial/VerifyMountFirst (1.46s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-155000 ssh -- ls /minikube-host
E0315 20:41:13.098278    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-155000 ssh -- ls /minikube-host: (1.4590633s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.46s)

TestMountStart/serial/StartWithMountSecond (24.08s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-155000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-155000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (23.0671506s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.08s)

TestMountStart/serial/VerifyMountSecond (1.47s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-155000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-155000 ssh -- ls /minikube-host: (1.4660412s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.47s)

TestMountStart/serial/DeleteFirst (4.93s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-155000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-155000 --alsologtostderr -v=5: (4.9262544s)
--- PASS: TestMountStart/serial/DeleteFirst (4.93s)

TestMountStart/serial/VerifyMountPostDelete (1.44s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-155000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-155000 ssh -- ls /minikube-host: (1.4373651s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.44s)

TestMountStart/serial/Stop (3.02s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-155000
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-155000: (3.016236s)
--- PASS: TestMountStart/serial/Stop (3.02s)

TestMountStart/serial/RestartStopped (16.63s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-155000
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-155000: (15.6174865s)
--- PASS: TestMountStart/serial/RestartStopped (16.63s)

TestMountStart/serial/VerifyMountPostStop (1.49s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-155000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-155000 ssh -- ls /minikube-host: (1.4900748s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.49s)

TestMultiNode/serial/FreshStart2Nodes (186.5s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-009900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0315 20:43:25.000056    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:44:48.862942    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
multinode_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-009900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (3m3.3406232s)
multinode_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr: (3.1553837s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (186.50s)

TestMultiNode/serial/DeployApp2Nodes (28.42s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.3169507s)
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- rollout status deployment/busybox
E0315 20:45:21.812739    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- rollout status deployment/busybox: (20.9021593s)
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:503: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-7prb4 -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-7prb4 -- nslookup kubernetes.io: (1.1929657s)
multinode_test.go:511: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-b7rj9 -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-7prb4 -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-b7rj9 -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-7prb4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-b7rj9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (28.42s)

TestMultiNode/serial/PingHostFrom2Pods (3.69s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-7prb4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-7prb4 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:547: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-b7rj9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-009900 -- exec busybox-6b86dd6d48-b7rj9 -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (3.69s)

TestMultiNode/serial/AddNode (66.89s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-009900 -v 3 --alsologtostderr
E0315 20:46:13.107923    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
multinode_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-009900 -v 3 --alsologtostderr: (1m2.9485858s)
multinode_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr: (3.9388259s)
--- PASS: TestMultiNode/serial/AddNode (66.89s)

TestMultiNode/serial/ProfileList (1.78s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.7783069s)
--- PASS: TestMultiNode/serial/ProfileList (1.78s)

TestMultiNode/serial/CopyFile (52.64s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 status --output json --alsologtostderr: (3.6968151s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp testdata\cp-test.txt multinode-009900:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp testdata\cp-test.txt multinode-009900:/home/docker/cp-test.txt: (1.5039191s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test.txt": (1.4527356s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1841071500\001\cp-test_multinode-009900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1841071500\001\cp-test_multinode-009900.txt: (1.4369905s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test.txt": (1.5516766s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900:/home/docker/cp-test.txt multinode-009900-m02:/home/docker/cp-test_multinode-009900_multinode-009900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900:/home/docker/cp-test.txt multinode-009900-m02:/home/docker/cp-test_multinode-009900_multinode-009900-m02.txt: (2.1659338s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test.txt": (1.6935336s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test_multinode-009900_multinode-009900-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test_multinode-009900_multinode-009900-m02.txt": (1.5314049s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900:/home/docker/cp-test.txt multinode-009900-m03:/home/docker/cp-test_multinode-009900_multinode-009900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900:/home/docker/cp-test.txt multinode-009900-m03:/home/docker/cp-test_multinode-009900_multinode-009900-m03.txt: (2.2793239s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test.txt": (1.5210045s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test_multinode-009900_multinode-009900-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test_multinode-009900_multinode-009900-m03.txt": (1.4662681s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp testdata\cp-test.txt multinode-009900-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp testdata\cp-test.txt multinode-009900-m02:/home/docker/cp-test.txt: (1.4775937s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test.txt": (1.4403521s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1841071500\001\cp-test_multinode-009900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1841071500\001\cp-test_multinode-009900-m02.txt: (1.3878441s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test.txt": (1.5373669s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m02:/home/docker/cp-test.txt multinode-009900:/home/docker/cp-test_multinode-009900-m02_multinode-009900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m02:/home/docker/cp-test.txt multinode-009900:/home/docker/cp-test_multinode-009900-m02_multinode-009900.txt: (2.1064024s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test.txt": (1.4579743s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test_multinode-009900-m02_multinode-009900.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test_multinode-009900-m02_multinode-009900.txt": (1.4263352s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m02:/home/docker/cp-test.txt multinode-009900-m03:/home/docker/cp-test_multinode-009900-m02_multinode-009900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m02:/home/docker/cp-test.txt multinode-009900-m03:/home/docker/cp-test_multinode-009900-m02_multinode-009900-m03.txt: (2.2424629s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test.txt": (1.5368635s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test_multinode-009900-m02_multinode-009900-m03.txt"
E0315 20:47:36.277314    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test_multinode-009900-m02_multinode-009900-m03.txt": (1.5223135s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp testdata\cp-test.txt multinode-009900-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp testdata\cp-test.txt multinode-009900-m03:/home/docker/cp-test.txt: (1.4621678s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test.txt": (1.4515499s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1841071500\001\cp-test_multinode-009900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1841071500\001\cp-test_multinode-009900-m03.txt: (1.486924s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test.txt": (1.4904657s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m03:/home/docker/cp-test.txt multinode-009900:/home/docker/cp-test_multinode-009900-m03_multinode-009900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m03:/home/docker/cp-test.txt multinode-009900:/home/docker/cp-test_multinode-009900-m03_multinode-009900.txt: (2.2487712s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test.txt": (1.5066719s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test_multinode-009900-m03_multinode-009900.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900 "sudo cat /home/docker/cp-test_multinode-009900-m03_multinode-009900.txt": (1.4754486s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m03:/home/docker/cp-test.txt multinode-009900-m02:/home/docker/cp-test_multinode-009900-m03_multinode-009900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 cp multinode-009900-m03:/home/docker/cp-test.txt multinode-009900-m02:/home/docker/cp-test_multinode-009900-m03_multinode-009900-m02.txt: (2.0685328s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m03 "sudo cat /home/docker/cp-test.txt": (1.5043577s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test_multinode-009900-m03_multinode-009900-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 ssh -n multinode-009900-m02 "sudo cat /home/docker/cp-test_multinode-009900-m03_multinode-009900-m02.txt": (1.4869341s)
--- PASS: TestMultiNode/serial/CopyFile (52.64s)

TestMultiNode/serial/StopNode (8.56s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 node stop m03: (2.8371002s)
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-009900 status: exit status 7 (2.8666538s)

                                                
                                                
-- stdout --
	multinode-009900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-009900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-009900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr: exit status 7 (2.8501127s)

                                                
                                                
-- stdout --
	multinode-009900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-009900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-009900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 20:47:59.690589    1676 out.go:296] Setting OutFile to fd 856 ...
	I0315 20:47:59.759521    1676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 20:47:59.760062    1676 out.go:309] Setting ErrFile to fd 1012...
	I0315 20:47:59.760107    1676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 20:47:59.770740    1676 out.go:303] Setting JSON to false
	I0315 20:47:59.770740    1676 mustload.go:65] Loading cluster: multinode-009900
	I0315 20:47:59.770740    1676 notify.go:220] Checking for updates...
	I0315 20:47:59.772231    1676 config.go:182] Loaded profile config "multinode-009900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 20:47:59.772270    1676 status.go:255] checking status of multinode-009900 ...
	I0315 20:47:59.791455    1676 cli_runner.go:164] Run: docker container inspect multinode-009900 --format={{.State.Status}}
	I0315 20:48:00.015462    1676 status.go:330] multinode-009900 host status = "Running" (err=<nil>)
	I0315 20:48:00.015462    1676 host.go:66] Checking if "multinode-009900" exists ...
	I0315 20:48:00.024411    1676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-009900
	I0315 20:48:00.280618    1676 host.go:66] Checking if "multinode-009900" exists ...
	I0315 20:48:00.293190    1676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 20:48:00.300186    1676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-009900
	I0315 20:48:00.515723    1676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63132 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-009900\id_rsa Username:docker}
	I0315 20:48:00.672856    1676 ssh_runner.go:195] Run: systemctl --version
	I0315 20:48:00.702366    1676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 20:48:00.744494    1676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-009900
	I0315 20:48:00.972755    1676 kubeconfig.go:92] found "multinode-009900" server: "https://127.0.0.1:63131"
	I0315 20:48:00.972755    1676 api_server.go:165] Checking apiserver status ...
	I0315 20:48:00.983063    1676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 20:48:01.033569    1676 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2147/cgroup
	I0315 20:48:01.071747    1676 api_server.go:181] apiserver freezer: "20:freezer:/docker/1d7fe3c5e7b1d19d915c1adfbaac7d18a345c5afab65e2169913f2a9bfb245d7/kubepods/burstable/pod9ebd0241362c28b43fc583a6abfebf10/60b603e7b1b56e0e23d1caf1e3a08e43e1cf27e8d053cc886c57e5eb97599fa9"
	I0315 20:48:01.082203    1676 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1d7fe3c5e7b1d19d915c1adfbaac7d18a345c5afab65e2169913f2a9bfb245d7/kubepods/burstable/pod9ebd0241362c28b43fc583a6abfebf10/60b603e7b1b56e0e23d1caf1e3a08e43e1cf27e8d053cc886c57e5eb97599fa9/freezer.state
	I0315 20:48:01.113886    1676 api_server.go:203] freezer state: "THAWED"
	I0315 20:48:01.114012    1676 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63131/healthz ...
	I0315 20:48:01.135941    1676 api_server.go:278] https://127.0.0.1:63131/healthz returned 200:
	ok
	I0315 20:48:01.135941    1676 status.go:421] multinode-009900 apiserver status = Running (err=<nil>)
	I0315 20:48:01.135941    1676 status.go:257] multinode-009900 status: &{Name:multinode-009900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 20:48:01.135941    1676 status.go:255] checking status of multinode-009900-m02 ...
	I0315 20:48:01.152903    1676 cli_runner.go:164] Run: docker container inspect multinode-009900-m02 --format={{.State.Status}}
	I0315 20:48:01.388881    1676 status.go:330] multinode-009900-m02 host status = "Running" (err=<nil>)
	I0315 20:48:01.388881    1676 host.go:66] Checking if "multinode-009900-m02" exists ...
	I0315 20:48:01.398023    1676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-009900-m02
	I0315 20:48:01.624601    1676 host.go:66] Checking if "multinode-009900-m02" exists ...
	I0315 20:48:01.634580    1676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 20:48:01.641581    1676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-009900-m02
	I0315 20:48:01.874174    1676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63217 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-009900-m02\id_rsa Username:docker}
	I0315 20:48:02.027074    1676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 20:48:02.061691    1676 status.go:257] multinode-009900-m02 status: &{Name:multinode-009900-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0315 20:48:02.061837    1676 status.go:255] checking status of multinode-009900-m03 ...
	I0315 20:48:02.080146    1676 cli_runner.go:164] Run: docker container inspect multinode-009900-m03 --format={{.State.Status}}
	I0315 20:48:02.321948    1676 status.go:330] multinode-009900-m03 host status = "Stopped" (err=<nil>)
	I0315 20:48:02.322120    1676 status.go:343] host is not running, skipping remaining checks
	I0315 20:48:02.322120    1676 status.go:257] multinode-009900-m03 status: &{Name:multinode-009900-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (8.56s)

TestMultiNode/serial/StartAfterStop (30.09s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 node start m03 --alsologtostderr: (25.8621703s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 status
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 status: (3.6477273s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.09s)

TestMultiNode/serial/RestartKeepsNodes (142.15s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-009900
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-009900
multinode_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-009900: (27.8741522s)
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-009900 --wait=true -v=8 --alsologtostderr
E0315 20:49:48.859712    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 20:50:21.814290    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
multinode_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-009900 --wait=true -v=8 --alsologtostderr: (1m53.6769437s)
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-009900
--- PASS: TestMultiNode/serial/RestartKeepsNodes (142.15s)

TestMultiNode/serial/DeleteNode (16.58s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 node delete m03: (10.9762324s)
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr
multinode_test.go:398: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr: (4.7540477s)
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (16.58s)

TestMultiNode/serial/StopMultiNode (26.79s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 stop
E0315 20:51:13.099821    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
multinode_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 stop: (25.2339481s)
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-009900 status: exit status 7 (794.2264ms)

                                                
                                                
-- stdout --
	multinode-009900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-009900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr: exit status 7 (756.8731ms)

                                                
                                                
-- stdout --
	multinode-009900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-009900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 20:51:37.404223    2376 out.go:296] Setting OutFile to fd 936 ...
	I0315 20:51:37.478947    2376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 20:51:37.478947    2376 out.go:309] Setting ErrFile to fd 952...
	I0315 20:51:37.478947    2376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0315 20:51:37.489899    2376 out.go:303] Setting JSON to false
	I0315 20:51:37.489899    2376 mustload.go:65] Loading cluster: multinode-009900
	I0315 20:51:37.489899    2376 notify.go:220] Checking for updates...
	I0315 20:51:37.490909    2376 config.go:182] Loaded profile config "multinode-009900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0315 20:51:37.490909    2376 status.go:255] checking status of multinode-009900 ...
	I0315 20:51:37.506904    2376 cli_runner.go:164] Run: docker container inspect multinode-009900 --format={{.State.Status}}
	I0315 20:51:37.714392    2376 status.go:330] multinode-009900 host status = "Stopped" (err=<nil>)
	I0315 20:51:37.714392    2376 status.go:343] host is not running, skipping remaining checks
	I0315 20:51:37.714392    2376 status.go:257] multinode-009900 status: &{Name:multinode-009900 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 20:51:37.714392    2376 status.go:255] checking status of multinode-009900-m02 ...
	I0315 20:51:37.731390    2376 cli_runner.go:164] Run: docker container inspect multinode-009900-m02 --format={{.State.Status}}
	I0315 20:51:37.952406    2376 status.go:330] multinode-009900-m02 host status = "Stopped" (err=<nil>)
	I0315 20:51:37.952406    2376 status.go:343] host is not running, skipping remaining checks
	I0315 20:51:37.952406    2376 status.go:257] multinode-009900-m02 status: &{Name:multinode-009900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (26.79s)

TestMultiNode/serial/RestartMultiNode (89.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-009900 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-009900 --wait=true -v=8 --alsologtostderr --driver=docker: (1m25.483697s)
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-009900 status --alsologtostderr: (2.5310637s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.35s)

TestMultiNode/serial/ValidateNameConflict (98.26s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-009900
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-009900-m02 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-009900-m02 --driver=docker: exit status 14 (328.3019ms)

                                                
                                                
-- stdout --
	* [multinode-009900-m02] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16056
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-009900-m02' is duplicated with machine name 'multinode-009900-m02' in profile 'multinode-009900'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-009900-m03 --driver=docker
multinode_test.go:458: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-009900-m03 --driver=docker: (1m28.0743611s)
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-009900
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-009900: exit status 80 (1.6448768s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-009900
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-009900-m03 already exists in multinode-009900-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_node_17615de98fc431ce4460405c35b285c54151ae7f_11.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-009900-m03
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-009900-m03: (7.9396209s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (98.26s)

TestPreload (234.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-357200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E0315 20:55:21.814447    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 20:56:13.098294    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-357200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (2m30.1906709s)
preload_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-357200 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-357200 -- docker pull gcr.io/k8s-minikube/busybox: (2.9738222s)
preload_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-357200
preload_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-357200: (13.346052s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-357200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
E0315 20:57:52.050898    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-357200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (59.4661679s)
preload_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-357200 -- docker images
preload_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-357200 -- docker images: (1.511295s)
helpers_test.go:175: Cleaning up "test-preload-357200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-357200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-357200: (6.6256265s)
--- PASS: TestPreload (234.11s)

TestScheduledStopWindows (162.4s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-410000 --memory=2048 --driver=docker
E0315 20:59:48.860175    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 21:00:05.013667    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-410000 --memory=2048 --driver=docker: (1m28.5797287s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-410000 --schedule 5m
E0315 21:00:21.801148    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-410000 --schedule 5m: (1.7841528s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-410000 -n scheduled-stop-410000
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-410000 -n scheduled-stop-410000: (1.6748084s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-410000 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-410000 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.459323s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-410000 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-410000 --schedule 5s: (2.7515591s)
E0315 21:01:13.107861    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-410000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-410000: exit status 7 (517.6428ms)

                                                
                                                
-- stdout --
	scheduled-stop-410000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-410000 -n scheduled-stop-410000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-410000 -n scheduled-stop-410000: exit status 7 (516.7431ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-410000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-410000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-410000: (5.1045992s)
--- PASS: TestScheduledStopWindows (162.40s)

TestInsufficientStorage (58.7s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-646500 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-646500 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (50.2197983s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"96077dba-9013-4eed-9685-b0b621d87974","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-646500] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"78b28cc9-561c-4f08-ad85-e329a6b7c067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"28054a72-f730-43ef-ad93-9a3bbd56f0e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4ef8b2f6-85f6-4d76-99f7-d55424666db3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"5b4ca780-c5a0-4a86-8097-d6e825b52d82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16056"}}
	{"specversion":"1.0","id":"56cf5836-5508-4248-a002-57a89d3fbf64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"16a28b9c-dbc3-46f6-aca5-42e54a026f4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b4bfa968-c74c-4fc6-aa57-72caa6713a91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9d52dbb2-e069-42c4-956c-445fdd693849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cab64b81-501b-4035-b67f-35842ff34d50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"0cedc49b-e918-4ef8-a181-a0e9f65f501e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-646500 in cluster insufficient-storage-646500","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8d2b364-0b15-45a8-a50f-3e53c8cae55f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"96fb42d2-2839-498b-8762-b03338b55acb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0cfa4013-22f7-44d9-9dd9-df9644993cab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-646500 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-646500 --output=json --layout=cluster: exit status 7 (1.619525s)

-- stdout --
	{"Name":"insufficient-storage-646500","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-646500","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0315 21:02:26.860879    7376 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-646500" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-646500 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-646500 --output=json --layout=cluster: exit status 7 (1.4897635s)

-- stdout --
	{"Name":"insufficient-storage-646500","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-646500","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0315 21:02:28.354094    9488 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-646500" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	E0315 21:02:28.400896    9488 status.go:559] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\insufficient-storage-646500\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-646500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-646500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-646500: (5.3716275s)
--- PASS: TestInsufficientStorage (58.70s)
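The cluster-layout status payload shown above (HTTP-style code 507, `InsufficientStorage`) can be checked programmatically. A minimal sketch, assuming only the field names visible in the output above; the helper name `is_out_of_storage` is hypothetical, not part of minikube:

```python
import json

# Status payload as printed by `minikube status --output=json --layout=cluster`
# in the run above (abbreviated to the fields the check needs).
status_json = '''{"Name":"insufficient-storage-646500","StatusCode":507,
"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space",
"BinaryVersion":"v1.29.0",
"Nodes":[{"Name":"insufficient-storage-646500","StatusCode":507,
"StatusName":"InsufficientStorage"}]}'''

def is_out_of_storage(payload: str) -> bool:
    """Return True when the cluster or any node reports code 507."""
    status = json.loads(payload)
    return status["StatusCode"] == 507 or any(
        node["StatusCode"] == 507 for node in status.get("Nodes", [])
    )

print(is_out_of_storage(status_json))  # → True
```

This mirrors what `status_test.go:76` asserts: the command exits 7 while the JSON still carries the structured 507 status.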

TestRunningBinaryUpgrade (281.49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.9.0.3943698612.exe start -p running-upgrade-928100 --memory=2200 --vm-driver=docker
version_upgrade_test.go:128: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.9.0.3943698612.exe start -p running-upgrade-928100 --memory=2200 --vm-driver=docker: (3m18.6845709s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-928100 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-928100 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m8.109274s)
helpers_test.go:175: Cleaning up "running-upgrade-928100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-928100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-928100: (13.7115591s)
--- PASS: TestRunningBinaryUpgrade (281.49s)

TestKubernetesUpgrade (336.6s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-759100 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-759100 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (1m46.4303462s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-759100
version_upgrade_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-759100: (5.963204s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-759100 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-759100 status --format={{.Host}}: exit status 7 (713.9824ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-759100 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:251: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-759100 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker: (1m51.6995322s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-759100 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-759100 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-759100 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (476.0146ms)

-- stdout --
	* [kubernetes-upgrade-759100] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16056
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-759100
	    minikube start -p kubernetes-upgrade-759100 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7591002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.2, by running:
	    
	    minikube start -p kubernetes-upgrade-759100 --kubernetes-version=v1.26.2
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-759100 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker
E0315 21:10:21.804754    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:283: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-759100 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker: (1m35.5474805s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-759100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-759100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-759100: (15.4311551s)
--- PASS: TestKubernetesUpgrade (336.60s)
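The downgrade rejection above (exit status 106, `K8S_DOWNGRADE_UNSUPPORTED`) boils down to a semantic-version comparison between the requested version and the one the existing cluster runs. An illustrative sketch of that check, not minikube's actual implementation; both helper names are hypothetical:

```python
def parse_version(v: str) -> tuple:
    """Turn a tag like 'v1.26.2' into (1, 26, 2) for tuple comparison."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_unsupported_downgrade(existing: str, requested: str) -> bool:
    """True when the requested version is older than the running cluster's."""
    return parse_version(requested) < parse_version(existing)

# Mirrors the transition attempted above: v1.26.2 cluster, v1.16.0 requested.
print(is_unsupported_downgrade("v1.26.2", "v1.16.0"))  # → True
```

The suggestion block in the stderr above shows the supported ways out: delete and recreate at the older version, create a second profile, or keep the newer cluster.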

TestMissingContainerUpgrade (317.19s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.9.1.4283197659.exe start -p missing-upgrade-022900 --memory=2200 --driver=docker
version_upgrade_test.go:317: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.9.1.4283197659.exe start -p missing-upgrade-022900 --memory=2200 --driver=docker: (3m38.4212359s)
version_upgrade_test.go:326: (dbg) Run:  docker stop missing-upgrade-022900
version_upgrade_test.go:326: (dbg) Done: docker stop missing-upgrade-022900: (3.9373671s)
version_upgrade_test.go:331: (dbg) Run:  docker rm missing-upgrade-022900
version_upgrade_test.go:337: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-022900 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0315 21:09:48.845731    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:337: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-022900 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m22.5858633s)
helpers_test.go:175: Cleaning up "missing-upgrade-022900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-022900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-022900: (11.1025033s)
--- PASS: TestMissingContainerUpgrade (317.19s)

TestStoppedBinaryUpgrade/Setup (0.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-050900 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-050900 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (483.2269ms)

-- stdout --
	* [NoKubernetes-050900] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16056
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)
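The `MK_USAGE` failure above (exit status 14) comes from a mutual-exclusion check between `--no-kubernetes` and `--kubernetes-version`. An illustrative sketch of that kind of flag validation, not minikube's actual code; the function name is hypothetical:

```python
def validate_start_flags(no_kubernetes, kubernetes_version):
    """Reject the conflicting flag combination the test exercises above."""
    if no_kubernetes and kubernetes_version:
        raise ValueError(
            "cannot specify --kubernetes-version with --no-kubernetes")
    return True

# --no-kubernetes together with --kubernetes-version=1.20 is rejected:
try:
    validate_start_flags(True, "1.20")
except ValueError as err:
    print(err)
```

As the stderr notes, a globally configured version can be cleared with `minikube config unset kubernetes-version`.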

TestNoKubernetes/serial/StartWithK8s (180.52s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-050900 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-050900 --driver=docker: (2m58.6430585s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-050900 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-050900 status -o json: (1.8764503s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (180.52s)

TestStoppedBinaryUpgrade/Upgrade (309.21s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.9.0.4163353556.exe start -p stopped-upgrade-050900 --memory=2200 --vm-driver=docker
E0315 21:04:16.276593    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 21:04:48.852536    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 21:05:21.812507    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:191: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.9.0.4163353556.exe start -p stopped-upgrade-050900 --memory=2200 --vm-driver=docker: (3m41.9966818s)
version_upgrade_test.go:200: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.9.0.4163353556.exe -p stopped-upgrade-050900 stop
version_upgrade_test.go:200: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.9.0.4163353556.exe -p stopped-upgrade-050900 stop: (20.6392767s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-050900 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-050900 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m6.5753054s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (309.21s)

TestNoKubernetes/serial/StartWithStopK8s (49.11s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-050900 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-050900 --no-kubernetes --driver=docker: (36.7744257s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-050900 status -o json
E0315 21:06:13.096652    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-050900 status -o json: exit status 2 (1.8465691s)

-- stdout --
	{"Name":"NoKubernetes-050900","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-050900
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-050900: (10.4853011s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (49.11s)
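The `status -o json` payload above is what lets the test (and exit status 2) distinguish "container up, Kubernetes down" from a fully running profile. A minimal sketch of that check, using the exact JSON printed above; the helper name `k8s_components_stopped` is hypothetical:

```python
import json

# Profile status as printed by `minikube status -o json` in the run above.
status_json = ('{"Name":"NoKubernetes-050900","Host":"Running","Kubelet":"Stopped",'
               '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

def k8s_components_stopped(payload: str) -> bool:
    """True when the host container runs but kubelet and apiserver are stopped."""
    status = json.loads(payload)
    return (status["Host"] == "Running"
            and status["Kubelet"] == "Stopped"
            and status["APIServer"] == "Stopped")

print(k8s_components_stopped(status_json))  # → True
```

The later `VerifyK8sNotRunning` steps confirm the same state from inside the node via `systemctl is-active --quiet service kubelet`, which exits non-zero when the unit is inactive.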

TestNoKubernetes/serial/Start (33.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-050900 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-050900 --no-kubernetes --driver=docker: (33.874531s)
--- PASS: TestNoKubernetes/serial/Start (33.87s)

TestNoKubernetes/serial/VerifyK8sNotRunning (1.6s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-050900 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-050900 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.6003125s)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.60s)

TestNoKubernetes/serial/ProfileList (10.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (4.9607922s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (5.1406997s)
--- PASS: TestNoKubernetes/serial/ProfileList (10.10s)

TestNoKubernetes/serial/Stop (3.48s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-050900
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-050900: (3.4780434s)
--- PASS: TestNoKubernetes/serial/Stop (3.48s)

TestNoKubernetes/serial/StartNoArgs (17.89s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-050900 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-050900 --driver=docker: (17.8859281s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (17.89s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.62s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-050900 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-050900 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.6246616s)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.62s)

TestStoppedBinaryUpgrade/MinikubeLogs (6.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-050900
version_upgrade_test.go:214: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-050900: (6.9268238s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (6.93s)

TestPause/serial/Start (139.44s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-073300 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-073300 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m19.4418363s)
--- PASS: TestPause/serial/Start (139.44s)

TestStartStop/group/old-k8s-version/serial/FirstStart (186.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-103800 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-103800 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (3m6.2482329s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (186.25s)

TestStartStop/group/no-preload/serial/FirstStart (184.27s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-470000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.26.2
E0315 21:14:32.056926    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-470000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.26.2: (3m4.2710955s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (184.27s)

TestStartStop/group/embed-certs/serial/FirstStart (140.6s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-348900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.26.2
E0315 21:16:13.100900    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-348900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.26.2: (2m20.6028848s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (140.60s)

TestStartStop/group/old-k8s-version/serial/DeployApp (14.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-103800 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9c753c30-5394-4688-a078-b46877ed4ddb] Pending
helpers_test.go:344: "busybox" [9c753c30-5394-4688-a078-b46877ed4ddb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9c753c30-5394-4688-a078-b46877ed4ddb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 13.2528083s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-103800 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (14.96s)

TestStartStop/group/no-preload/serial/DeployApp (17.95s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-470000 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ad4fb253-fffd-4a1e-a5b5-f54fe9cad145] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ad4fb253-fffd-4a1e-a5b5-f54fe9cad145] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 16.1935378s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-470000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (17.95s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-103800 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-103800 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.8097944s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-103800 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.37s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (135.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-942900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-942900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.26.2: (2m15.0388157s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (135.04s)

TestStartStop/group/old-k8s-version/serial/Stop (15.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-103800 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-103800 --alsologtostderr -v=3: (15.6920499s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.69s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-470000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-470000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.5476269s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-470000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-470000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-470000 --alsologtostderr -v=3: (15.7715042s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (15.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-103800 -n old-k8s-version-103800
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-103800 -n old-k8s-version-103800: exit status 7 (706.9948ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-103800 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-103800 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-103800 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m22.3962491s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-103800 -n old-k8s-version-103800
E0315 21:24:48.859116    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-103800 -n old-k8s-version-103800: (2.1565961s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (444.55s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-470000 -n no-preload-470000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-470000 -n no-preload-470000: exit status 7 (694.5109ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-470000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-470000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.4801727s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (2.17s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-470000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-470000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.26.2: (7m8.7890741s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-470000 -n no-preload-470000
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-470000 -n no-preload-470000: (2.8436291s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (431.63s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-348900 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context embed-certs-348900 create -f testdata\busybox.yaml: (1.0538995s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1989ca8e-6dad-4893-8019-07e4238f96f3] Pending
helpers_test.go:344: "busybox" [1989ca8e-6dad-4893-8019-07e4238f96f3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1989ca8e-6dad-4893-8019-07e4238f96f3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 15.1649645s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-348900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (16.86s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-348900 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-348900 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.3269907s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-348900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.79s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-348900 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-348900 --alsologtostderr -v=3: (17.6240798s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.62s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-348900 -n embed-certs-348900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-348900 -n embed-certs-348900: exit status 7 (648.3546ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-348900 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-348900 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.2628379s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.91s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-348900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-348900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.26.2: (6m8.247496s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-348900 -n embed-certs-348900
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-348900 -n embed-certs-348900: (2.2951919s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (370.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-942900 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4e680817-9ad7-44ea-8092-caf9a9086fb5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4e680817-9ad7-44ea-8092-caf9a9086fb5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.0367869s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-942900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-942900 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-942900 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.2094551s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-942900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-942900 --alsologtostderr -v=3
E0315 21:19:48.860046    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-942900 --alsologtostderr -v=3: (13.9592824s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900: exit status 7 (593.0064ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-942900 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-942900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.26.2
E0315 21:20:21.799636    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 21:20:56.290291    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 21:21:13.090574    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-942900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.26.2: (11m48.5066112s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900: (1.9273417s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (710.43s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-ltlr2" [8f97a05b-56ef-41d5-acc3-ca9eda962a34] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-ltlr2" [8f97a05b-56ef-41d5-acc3-ca9eda962a34] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 1m1.098761s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (61.10s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-845b8" [14266fa2-6942-46b3-ba36-b0a0d7b3380a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-845b8" [14266fa2-6942-46b3-ba36-b0a0d7b3380a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 54.0761456s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (54.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-jn66v" [021ba610-297f-4925-87d4-d1413922e77e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0477842s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-jn66v" [021ba610-297f-4925-87d4-d1413922e77e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0426713s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-103800 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-103800 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-103800 "sudo crictl images -o json": (1.9769204s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-103800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-103800 --alsologtostderr -v=1: (3.3742071s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-103800 -n old-k8s-version-103800
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-103800 -n old-k8s-version-103800: exit status 2 (2.090023s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-103800 -n old-k8s-version-103800
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-103800 -n old-k8s-version-103800: exit status 2 (2.1600958s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-103800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-103800 --alsologtostderr -v=1: (2.6791472s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-103800 -n old-k8s-version-103800
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-103800 -n old-k8s-version-103800: (2.3962532s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-103800 -n old-k8s-version-103800
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-103800 -n old-k8s-version-103800: (1.9347045s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (14.63s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-530900 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-530900 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.26.2: (2m20.6765553s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (140.68s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-ltlr2" [8f97a05b-56ef-41d5-acc3-ca9eda962a34] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0236057s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-348900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.75s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-845b8" [14266fa2-6942-46b3-ba36-b0a0d7b3380a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.027584s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-470000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.69s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-348900 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-348900 "sudo crictl images -o json": (2.0988483s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.10s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-470000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-470000 "sudo crictl images -o json": (2.0998276s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.10s)

TestStartStop/group/embed-certs/serial/Pause (18.86s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-348900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-348900 --alsologtostderr -v=1: (3.5155376s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-348900 -n embed-certs-348900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-348900 -n embed-certs-348900: exit status 2 (2.1474441s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-348900 -n embed-certs-348900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-348900 -n embed-certs-348900: exit status 2 (2.5005896s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-348900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-348900 --alsologtostderr -v=1: (4.9605908s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-348900 -n embed-certs-348900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-348900 -n embed-certs-348900: (2.8153922s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-348900 -n embed-certs-348900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-348900 -n embed-certs-348900: (2.9210204s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (18.86s)

TestStartStop/group/no-preload/serial/Pause (16.37s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-470000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-470000 --alsologtostderr -v=1: (3.466493s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-470000 -n no-preload-470000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-470000 -n no-preload-470000: exit status 2 (2.1164465s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-470000 -n no-preload-470000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-470000 -n no-preload-470000: exit status 2 (2.2618473s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-470000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-470000 --alsologtostderr -v=1: (3.7211831s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-470000 -n no-preload-470000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-470000 -n no-preload-470000: (3.1465109s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-470000 -n no-preload-470000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-470000 -n no-preload-470000: (1.6588494s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (16.37s)

TestNetworkPlugins/group/auto/Start (116.77s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m56.7692115s)
--- PASS: TestNetworkPlugins/group/auto/Start (116.77s)

TestNetworkPlugins/group/calico/Start (268.14s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
E0315 21:26:50.849851    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:50.865102    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:50.881044    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:50.912169    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:50.960109    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:51.055342    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:51.229591    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:51.559412    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:52.207824    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:53.498460    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:55.693611    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:26:55.709285    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:26:55.724503    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:26:55.756253    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:26:55.804009    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:26:55.897812    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:26:56.071768    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:26:56.072022    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:26:56.401987    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:26:57.048835    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:26:58.332028    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:27:00.905523    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:27:01.220315    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:27:06.026898    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:27:11.472153    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:27:16.280077    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
E0315 21:27:31.965386    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:27:36.768113    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (4m28.1404243s)
--- PASS: TestNetworkPlugins/group/calico/Start (268.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.31s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-530900 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-530900 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.3120491s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.31s)

TestStartStop/group/newest-cni/serial/Stop (14.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-530900 --alsologtostderr -v=3
E0315 21:28:12.932928    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-530900 --alsologtostderr -v=3: (14.4606975s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (14.46s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-530900 -n newest-cni-530900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-530900 -n newest-cni-530900: exit status 7 (598.4195ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-530900 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.23s)

TestStartStop/group/newest-cni/serial/SecondStart (64.77s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-530900 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.26.2
E0315 21:28:17.741617    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-530900 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.26.2: (1m1.9892727s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-530900 -n newest-cni-530900
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-530900 -n newest-cni-530900: (2.7774725s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (64.77s)

TestNetworkPlugins/group/auto/KubeletFlags (1.77s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-899600 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-899600 "pgrep -a kubelet": (1.7743535s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.77s)

TestNetworkPlugins/group/auto/NetCatPod (29.96s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-899600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-cf57g" [8c51f477-837d-443f-8eda-a3bca927503d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-cf57g" [8c51f477-837d-443f-8eda-a3bca927503d] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 29.0733439s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (29.96s)

TestNetworkPlugins/group/auto/DNS (0.59s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-899600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.59s)

TestNetworkPlugins/group/auto/Localhost (0.55s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.55s)

TestNetworkPlugins/group/auto/HairPin (0.59s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.59s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.35s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-530900 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-530900 "sudo crictl images -o json": (2.3508661s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.35s)

TestStartStop/group/newest-cni/serial/Pause (16.92s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-530900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-530900 --alsologtostderr -v=1: (4.0326096s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-530900 -n newest-cni-530900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-530900 -n newest-cni-530900: exit status 2 (2.2619143s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-530900 -n newest-cni-530900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-530900 -n newest-cni-530900: exit status 2 (2.2765195s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-530900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-530900 --alsologtostderr -v=1: (2.8156716s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-530900 -n newest-cni-530900
E0315 21:29:34.867893    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-530900 -n newest-cni-530900: (3.0081122s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-530900 -n newest-cni-530900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-530900 -n newest-cni-530900: (2.5299874s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (16.92s)

TestNetworkPlugins/group/custom-flannel/Start (140.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (2m20.2176641s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (140.22s)

TestNetworkPlugins/group/false/Start (117.05s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p false-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m57.0542518s)
--- PASS: TestNetworkPlugins/group/false/Start (117.05s)

TestNetworkPlugins/group/calico/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nvftb" [a83bb160-08e5-41e7-9d02-076ee8097f48] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.0539567s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.06s)

TestNetworkPlugins/group/calico/KubeletFlags (1.61s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-899600 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-899600 "pgrep -a kubelet": (1.6065914s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (1.61s)

TestNetworkPlugins/group/calico/NetCatPod (28.01s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-899600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-hxf94" [92d3f8b4-afe5-412a-b6f0-c9b264d8dee0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0315 21:31:12.059153    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 21:31:13.102078    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-hxf94" [92d3f8b4-afe5-412a-b6f0-c9b264d8dee0] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 27.0635114s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (28.01s)

TestNetworkPlugins/group/calico/DNS (0.72s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-899600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.72s)

TestNetworkPlugins/group/calico/Localhost (0.65s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.65s)

TestNetworkPlugins/group/calico/HairPin (0.58s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.58s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-wj7fb" [3c1eb236-9dc7-4c39-8d1b-ca5243c3a3a0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0505749s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-wj7fb" [3c1eb236-9dc7-4c39-8d1b-ca5243c3a3a0] Running
E0315 21:31:50.849117    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0214199s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-942900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.57s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-942900 "sudo crictl images -o json"
E0315 21:31:55.690037    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-942900 "sudo crictl images -o json": (1.85165s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.85s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (12.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-942900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-942900 --alsologtostderr -v=1: (2.5003555s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900: exit status 2 (1.7970756s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900: exit status 2 (1.6407196s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-942900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-942900 --alsologtostderr -v=1: (2.3978136s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900: (2.2005679s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-942900 -n default-k8s-diff-port-942900: (2.157955s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (12.69s)
E0315 21:38:33.119613    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-899600\client.crt: The system cannot find the path specified.
E0315 21:38:44.025588    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-899600\client.crt: The system cannot find the path specified.
E0315 21:38:46.662072    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-899600\client.crt: The system cannot find the path specified.

TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.6s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-899600 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-899600 "pgrep -a kubelet": (1.5987743s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.60s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (58.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-899600 replace --force -f testdata\netcat-deployment.yaml
E0315 21:32:18.718111    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
net_test.go:148: (dbg) Done: kubectl --context custom-flannel-899600 replace --force -f testdata\netcat-deployment.yaml: (6.8284273s)
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-gnccn" [ec171a6f-30a5-428a-8c59-1ccaa7ef27a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-gnccn" [ec171a6f-30a5-428a-8c59-1ccaa7ef27a6] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 51.8624605s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (58.88s)

TestNetworkPlugins/group/kindnet/Start (159.77s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (2m39.7650653s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (159.77s)

TestNetworkPlugins/group/false/KubeletFlags (2s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-899600 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-899600 "pgrep -a kubelet": (2.0030599s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (2.00s)

TestNetworkPlugins/group/false/NetCatPod (46.11s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-899600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-5f5pz" [a1cced96-74c1-4b70-b6dd-6375429073dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-5f5pz" [a1cced96-74c1-4b70-b6dd-6375429073dc] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 45.1137586s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (46.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-899600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.65s)

TestNetworkPlugins/group/false/DNS (0.7s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-899600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.70s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.79s)

TestNetworkPlugins/group/false/Localhost (0.7s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.70s)

TestNetworkPlugins/group/flannel/Start (177s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (2m56.9974731s)
--- PASS: TestNetworkPlugins/group/flannel/Start (177.00s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.62s)

TestNetworkPlugins/group/false/HairPin (0.73s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.73s)

TestNetworkPlugins/group/enable-default-cni/Start (130.71s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
E0315 21:34:48.847473    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (2m10.7052328s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (130.71s)

TestNetworkPlugins/group/bridge/Start (112.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E0315 21:34:55.162635    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-899600\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m52.8234342s)
--- PASS: TestNetworkPlugins/group/bridge/Start (112.82s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bz2d2" [b7193935-09a1-478b-b536-0d5418645c7a] Running
E0315 21:35:04.651692    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-942900\client.crt: The system cannot find the path specified.
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.0431492s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

TestNetworkPlugins/group/kindnet/KubeletFlags (1.53s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-899600 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-899600 "pgrep -a kubelet": (1.5308358s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.53s)

TestNetworkPlugins/group/kindnet/NetCatPod (48.96s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-899600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-pkx8p" [eb7618af-31ee-440d-964b-335da2bae4f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0315 21:35:21.802315    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
E0315 21:35:45.613282    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-942900\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-pkx8p" [eb7618af-31ee-440d-964b-335da2bae4f8] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 48.092992s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (48.96s)

TestNetworkPlugins/group/kindnet/DNS (0.64s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-899600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.64s)

TestNetworkPlugins/group/kindnet/Localhost (0.52s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.52s)

TestNetworkPlugins/group/kindnet/HairPin (0.59s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.59s)

TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vmfb6" [377d7557-281e-4ada-93ad-eb1672cd7b36] Running
E0315 21:36:13.021484    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-899600\client.crt: The system cannot find the path specified.
E0315 21:36:13.099550    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-976400\client.crt: The system cannot find the path specified.
E0315 21:36:17.084697    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-899600\client.crt: The system cannot find the path specified.
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.040756s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

TestNetworkPlugins/group/flannel/KubeletFlags (1.74s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-899600 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-899600 "pgrep -a kubelet": (1.7426628s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (1.74s)

TestNetworkPlugins/group/flannel/NetCatPod (27.04s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-899600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-tnnn4" [a6609914-a2c2-449a-93fa-d01af8b8ef74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0315 21:36:23.276219    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-899600\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-tnnn4" [a6609914-a2c2-449a-93fa-d01af8b8ef74] Running
E0315 21:36:43.760311    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-899600\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 26.0854014s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (27.04s)

TestNetworkPlugins/group/flannel/DNS (0.54s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-899600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.54s)

TestNetworkPlugins/group/flannel/Localhost (0.56s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.56s)

TestNetworkPlugins/group/flannel/HairPin (0.61s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.61s)

TestNetworkPlugins/group/bridge/KubeletFlags (1.63s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-899600 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-899600 "pgrep -a kubelet": (1.633393s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (1.63s)

TestNetworkPlugins/group/bridge/NetCatPod (37.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-899600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-dhwrz" [60fe0cba-9ba4-47e4-8686-27316d349285] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0315 21:36:50.848594    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-103800\client.crt: The system cannot find the path specified.
E0315 21:36:55.693841    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-dhwrz" [60fe0cba-9ba4-47e4-8686-27316d349285] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 36.1068905s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (37.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (2.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-899600 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-899600 "pgrep -a kubelet": (2.1304361s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (2.13s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (41.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-899600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6dnhg" [b55f5a3e-d9d0-4f97-ab6d-db3ebd28623c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0315 21:37:07.542717    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-942900\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-6dnhg" [b55f5a3e-d9d0-4f97-ab6d-db3ebd28623c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 40.0860628s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (41.26s)

TestNetworkPlugins/group/bridge/DNS (7.41s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-899600 exec deployment/netcat -- nslookup kubernetes.default
E0315 21:37:27.184065    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-899600\client.crt: The system cannot find the path specified.
E0315 21:37:28.787127    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
E0315 21:37:28.802550    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
E0315 21:37:28.818076    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
E0315 21:37:28.849833    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
E0315 21:37:28.896844    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
E0315 21:37:28.990071    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
E0315 21:37:29.161608    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
E0315 21:37:29.492745    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
E0315 21:37:30.138284    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
E0315 21:37:31.433041    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
E0315 21:37:32.318836    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-899600\client.crt: The system cannot find the path specified.
net_test.go:174: (dbg) Done: kubectl --context bridge-899600 exec deployment/netcat -- nslookup kubernetes.default: (7.40477s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (7.41s)

TestNetworkPlugins/group/bridge/Localhost (0.52s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0315 21:37:34.005455    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-899600\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.52s)

TestNetworkPlugins/group/kubenet/Start (119.43s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-899600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m59.4261267s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (119.43s)

TestNetworkPlugins/group/bridge/HairPin (1.08s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:248: (dbg) Done: kubectl --context bridge-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": (1.0721219s)
--- PASS: TestNetworkPlugins/group/bridge/HairPin (1.08s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-899600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.67s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.53s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.53s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.6s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.60s)

TestNetworkPlugins/group/kubenet/KubeletFlags (1.43s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-899600 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-899600 "pgrep -a kubelet": (1.431839s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.43s)

TestNetworkPlugins/group/kubenet/NetCatPod (26.04s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-899600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6z558" [c06c1565-7212-4434-822a-0d062c8f2495] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0315 21:39:48.846829    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
E0315 21:39:51.387420    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-942900\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-694fc96674-6z558" [c06c1565-7212-4434-822a-0d062c8f2495] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 25.1311872s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (26.04s)

TestNetworkPlugins/group/kubenet/DNS (0.55s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-899600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.55s)

TestNetworkPlugins/group/kubenet/Localhost (0.49s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.49s)

TestNetworkPlugins/group/kubenet/HairPin (0.47s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-899600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.47s)


Test skip (25/305)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.26.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.2/cached-images (0.00s)

TestDownloadOnly/v1.26.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.2/binaries (0.00s)

TestAddons/parallel/Registry (31.79s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 53.614ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-q6bh8" [7f00b402-1dbb-4b2d-ab17-3213e4da8c22] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.1012039s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-p788q" [33f3cd6b-f3e5-45ce-901c-b9c7987f51de] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0762794s
addons_test.go:305: (dbg) Run:  kubectl --context addons-553600 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-553600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-553600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (21.1091765s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (31.79s)

TestAddons/parallel/Ingress (63.03s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-553600 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-553600 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Done: kubectl --context addons-553600 replace --force -f testdata\nginx-ingress-v1.yaml: (6.9224124s)
addons_test.go:210: (dbg) Run:  kubectl --context addons-553600 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:210: (dbg) Done: kubectl --context addons-553600 replace --force -f testdata\nginx-pod-svc.yaml: (1.6897875s)
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b7a8559a-da57-463e-8708-08c0229a0d26] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b7a8559a-da57-463e-8708-08c0229a0d26] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 52.2760594s
addons_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-553600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe -p addons-553600 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.5781666s)
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (63.03s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-919600 --alsologtostderr -v=1]
functional_test.go:911: output didn't produce a URL
functional_test.go:905: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-919600 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 10188: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.04s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:60: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (13.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-919600 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-919600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-kqchd" [d80d3336-a9f8-41e6-815a-2d28fb46fc3e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-kqchd" [d80d3336-a9f8-41e6-815a-2d28fb46fc3e] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.1636008s
functional_test.go:1644: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (13.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (52.44s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-976400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-976400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.6061632s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-976400 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:197: (dbg) Done: kubectl --context ingress-addon-legacy-976400 replace --force -f testdata\nginx-ingress-v1beta1.yaml: (1.6174477s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-976400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:210: (dbg) Done: kubectl --context ingress-addon-legacy-976400 replace --force -f testdata\nginx-pod-svc.yaml: (1.4858682s)
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bd21fe32-ce48-415d-809e-c0954ac399e8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0315 20:26:44.995285    8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
helpers_test.go:344: "nginx" [bd21fe32-ce48-415d-809e-c0954ac399e8] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 34.1918766s
addons_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-976400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-976400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.4520216s)
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (52.44s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-365200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-365200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-365200: (1.9975817s)
--- SKIP: TestStartStop/group/disable-driver-mounts (2.00s)

TestNetworkPlugins/group/cilium (21.38s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-899600 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> k8s: describe netcat deployment:
error: context "cilium-899600" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-899600" does not exist

>>> k8s: netcat logs:
error: context "cilium-899600" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-899600" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-899600" does not exist

>>> k8s: coredns logs:
error: context "cilium-899600" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-899600" does not exist

>>> k8s: api server logs:
error: context "cilium-899600" does not exist

>>> host: /etc/cni:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: ip a s:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: ip r s:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: iptables-save:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: iptables table nat:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> k8s: describe cilium daemon set:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-899600" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-899600" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

>>> k8s: describe cilium deployment pod(s):
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-899600" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-899600" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-899600" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-899600" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-899600" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: kubelet daemon config:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> k8s: kubelet logs:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Wed, 15 Mar 2023 21:09:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://127.0.0.1:64657
  name: cert-expiration-023900
contexts:
- context:
    cluster: cert-expiration-023900
    extensions:
    - extension:
        last-update: Wed, 15 Mar 2023 21:09:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: cert-expiration-023900
  name: cert-expiration-023900
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-023900
  user:
    client-certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-023900\client.crt
    client-key: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-023900\client.key

>>> k8s: cms:
Error in configuration: 
* context was not found for specified context: cilium-899600
* cluster has no server defined

>>> host: docker daemon status:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: docker daemon config:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: docker system info:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: cri-docker daemon status:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: cri-docker daemon config:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: cri-dockerd version:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: containerd daemon status:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: containerd daemon config:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: containerd config dump:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: crio daemon status:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: crio daemon config:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: /etc/crio:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

>>> host: crio config:
* Profile "cilium-899600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-899600"

----------------------- debugLogs end: cilium-899600 [took: 19.5539756s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-899600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-899600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-899600: (1.8285593s)
--- SKIP: TestNetworkPlugins/group/cilium (21.38s)
