Test Report: Hyperkit_macOS 15565

b70896c80ee4e66ab69b71a68ac4d59d2145555e:2023-01-08:27335

Failed tests (2/301)

Order  Failed test                                      Duration (s)
231    TestPause/serial/SecondStartNoReconfiguration    55.05
315    TestNetworkPlugins/group/kubenet/HairPin         54.16
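The first failure below comes from the check at pause_test.go:100: after a second `minikube start` against an already-running cluster, the test asserts that the start log contains the marker "The running cluster does not require reconfiguration", and fails because the captured stdout (shown in the detail log) never prints it. A minimal sketch of that kind of assertion — `verifySecondStart` is an illustrative stand-in, not the actual minikube test code:

```go
package main

import (
	"fmt"
	"strings"
)

// The marker the second start is expected to log when no reconfiguration occurs.
const reconfigMarker = "The running cluster does not require reconfiguration"

// verifySecondStart mimics the pause_test.go:100 check: it returns an error
// when the second start's output lacks the no-reconfiguration marker.
func verifySecondStart(output string) error {
	if !strings.Contains(output, reconfigMarker) {
		return fmt.Errorf("expected the second start log output to include %q but got:\n%s",
			reconfigMarker, output)
	}
	return nil
}

func main() {
	// The stdout captured in this report only shows the VM being updated,
	// never the marker, so the check fails exactly as reported.
	failing := "* Updating the running hyperkit \"pause-132406\" VM ...\n* Done!"
	fmt.Println(verifySecondStart(failing) != nil) // true: this is the reported failure
}
```

Note that the stdout in the report shows "Updating the running hyperkit ... VM" instead of the expected skip message, i.e. the second start did reconfigure the running cluster.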
TestPause/serial/SecondStartNoReconfiguration (55.05s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-132406 --alsologtostderr -v=1 --driver=hyperkit 

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-132406 --alsologtostderr -v=1 --driver=hyperkit : (48.437582205s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-132406] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	* Using the hyperkit driver based on existing profile
	* Starting control plane node pause-132406 in cluster pause-132406
	* Updating the running hyperkit "pause-132406" VM ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Done! kubectl is now configured to use "pause-132406" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0108 13:24:59.144440   11017 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:24:59.144700   11017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:24:59.144706   11017 out.go:309] Setting ErrFile to fd 2...
	I0108 13:24:59.144710   11017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:24:59.144819   11017 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
	I0108 13:24:59.145302   11017 out.go:303] Setting JSON to false
	I0108 13:24:59.165166   11017 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5073,"bootTime":1673208026,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0108 13:24:59.165262   11017 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 13:24:59.187636   11017 out.go:177] * [pause-132406] minikube v1.28.0 on Darwin 13.0.1
	I0108 13:24:59.229531   11017 notify.go:220] Checking for updates...
	I0108 13:24:59.250258   11017 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 13:24:59.271341   11017 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	I0108 13:24:59.292311   11017 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 13:24:59.313208   11017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 13:24:59.334331   11017 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	I0108 13:24:59.355668   11017 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:24:59.356032   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:24:59.356077   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:24:59.363072   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52862
	I0108 13:24:59.363470   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:24:59.363894   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:24:59.363904   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:24:59.364148   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:24:59.364248   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:24:59.364376   11017 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 13:24:59.364666   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:24:59.364695   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:24:59.371900   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52864
	I0108 13:24:59.372300   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:24:59.372643   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:24:59.372655   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:24:59.372850   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:24:59.372953   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:24:59.400218   11017 out.go:177] * Using the hyperkit driver based on existing profile
	I0108 13:24:59.421285   11017 start.go:294] selected driver: hyperkit
	I0108 13:24:59.421306   11017 start.go:838] validating driver "hyperkit" against &{Name:pause-132406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-132406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:24:59.421422   11017 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 13:24:59.421481   11017 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:24:59.421599   11017 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3013/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0108 13:24:59.428642   11017 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0108 13:24:59.432002   11017 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:24:59.432023   11017 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0108 13:24:59.434365   11017 cni.go:95] Creating CNI manager for ""
	I0108 13:24:59.434384   11017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:24:59.434399   11017 start_flags.go:317] config:
	{Name:pause-132406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-132406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:24:59.434556   11017 iso.go:125] acquiring lock: {Name:mk509bccdb22b8c95ebe7c0f784c1151265efda4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:24:59.476204   11017 out.go:177] * Starting control plane node pause-132406 in cluster pause-132406
	I0108 13:24:59.497317   11017 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:24:59.497392   11017 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 13:24:59.497432   11017 cache.go:57] Caching tarball of preloaded images
	I0108 13:24:59.497617   11017 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 13:24:59.497637   11017 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 13:24:59.497768   11017 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/config.json ...
	I0108 13:24:59.498436   11017 cache.go:193] Successfully downloaded all kic artifacts
	I0108 13:24:59.498484   11017 start.go:364] acquiring machines lock for pause-132406: {Name:mk29e5f49e96ee5817a491da62b8738aae3fb506 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 13:24:59.498571   11017 start.go:368] acquired machines lock for "pause-132406" in 69.225µs
	I0108 13:24:59.498618   11017 start.go:96] Skipping create...Using existing machine configuration
	I0108 13:24:59.498629   11017 fix.go:55] fixHost starting: 
	I0108 13:24:59.499114   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:24:59.499149   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:24:59.506673   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52866
	I0108 13:24:59.507029   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:24:59.507358   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:24:59.507369   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:24:59.507578   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:24:59.507683   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:24:59.507764   11017 main.go:134] libmachine: (pause-132406) Calling .GetState
	I0108 13:24:59.507847   11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:24:59.507930   11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
	I0108 13:24:59.508870   11017 fix.go:103] recreateIfNeeded on pause-132406: state=Running err=<nil>
	W0108 13:24:59.508885   11017 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 13:24:59.553275   11017 out.go:177] * Updating the running hyperkit "pause-132406" VM ...
	I0108 13:24:59.574311   11017 machine.go:88] provisioning docker machine ...
	I0108 13:24:59.574343   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:24:59.574546   11017 main.go:134] libmachine: (pause-132406) Calling .GetMachineName
	I0108 13:24:59.574663   11017 buildroot.go:166] provisioning hostname "pause-132406"
	I0108 13:24:59.574678   11017 main.go:134] libmachine: (pause-132406) Calling .GetMachineName
	I0108 13:24:59.574785   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:24:59.574903   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:24:59.575002   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:24:59.575111   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:24:59.575214   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:24:59.575380   11017 main.go:134] libmachine: Using SSH client type: native
	I0108 13:24:59.575607   11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.27 22 <nil> <nil>}
	I0108 13:24:59.575620   11017 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-132406 && echo "pause-132406" | sudo tee /etc/hostname
	I0108 13:24:59.660545   11017 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-132406
	
	I0108 13:24:59.660565   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:24:59.660757   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:24:59.660900   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:24:59.661028   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:24:59.661184   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:24:59.661369   11017 main.go:134] libmachine: Using SSH client type: native
	I0108 13:24:59.661538   11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.27 22 <nil> <nil>}
	I0108 13:24:59.661551   11017 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-132406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-132406/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-132406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 13:24:59.731442   11017 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:24:59.731463   11017 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3013/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3013/.minikube}
	I0108 13:24:59.731476   11017 buildroot.go:174] setting up certificates
	I0108 13:24:59.731493   11017 provision.go:83] configureAuth start
	I0108 13:24:59.731509   11017 main.go:134] libmachine: (pause-132406) Calling .GetMachineName
	I0108 13:24:59.731656   11017 main.go:134] libmachine: (pause-132406) Calling .GetIP
	I0108 13:24:59.731755   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:24:59.731862   11017 provision.go:138] copyHostCerts
	I0108 13:24:59.731961   11017 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.pem, removing ...
	I0108 13:24:59.731972   11017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.pem
	I0108 13:24:59.732122   11017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.pem (1082 bytes)
	I0108 13:24:59.732354   11017 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3013/.minikube/cert.pem, removing ...
	I0108 13:24:59.732367   11017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3013/.minikube/cert.pem
	I0108 13:24:59.732482   11017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3013/.minikube/cert.pem (1123 bytes)
	I0108 13:24:59.732719   11017 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3013/.minikube/key.pem, removing ...
	I0108 13:24:59.732728   11017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3013/.minikube/key.pem
	I0108 13:24:59.732794   11017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3013/.minikube/key.pem (1675 bytes)
	I0108 13:24:59.732941   11017 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca-key.pem org=jenkins.pause-132406 san=[192.168.64.27 192.168.64.27 localhost 127.0.0.1 minikube pause-132406]
	I0108 13:24:59.847140   11017 provision.go:172] copyRemoteCerts
	I0108 13:24:59.847204   11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 13:24:59.847222   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:24:59.847387   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:24:59.847466   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:24:59.847559   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:24:59.847641   11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
	I0108 13:24:59.888266   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 13:24:59.905918   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0108 13:24:59.923973   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 13:24:59.942131   11017 provision.go:86] duration metric: configureAuth took 210.621055ms
	I0108 13:24:59.942145   11017 buildroot.go:189] setting minikube options for container-runtime
	I0108 13:24:59.942323   11017 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:24:59.942338   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:24:59.942477   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:24:59.942575   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:24:59.942665   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:24:59.942753   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:24:59.942866   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:24:59.943017   11017 main.go:134] libmachine: Using SSH client type: native
	I0108 13:24:59.943143   11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.27 22 <nil> <nil>}
	I0108 13:24:59.943153   11017 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 13:25:00.012844   11017 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 13:25:00.012862   11017 buildroot.go:70] root file system type: tmpfs
	I0108 13:25:00.013031   11017 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 13:25:00.013062   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:00.013209   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:00.013317   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:00.013432   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:00.013556   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:00.013811   11017 main.go:134] libmachine: Using SSH client type: native
	I0108 13:25:00.013985   11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.27 22 <nil> <nil>}
	I0108 13:25:00.014047   11017 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 13:25:00.093902   11017 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 13:25:00.093932   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:00.094100   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:00.094195   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:00.094319   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:00.094441   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:00.094615   11017 main.go:134] libmachine: Using SSH client type: native
	I0108 13:25:00.094739   11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.27 22 <nil> <nil>}
	I0108 13:25:00.094752   11017 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 13:25:00.167191   11017 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:25:00.167204   11017 machine.go:91] provisioned docker machine in 592.87714ms
	I0108 13:25:00.167215   11017 start.go:300] post-start starting for "pause-132406" (driver="hyperkit")
	I0108 13:25:00.167223   11017 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 13:25:00.167235   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:00.167462   11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 13:25:00.167476   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:00.167587   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:00.167685   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:00.167791   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:00.167893   11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
	I0108 13:25:00.210430   11017 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 13:25:00.213362   11017 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 13:25:00.213376   11017 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3013/.minikube/addons for local assets ...
	I0108 13:25:00.213473   11017 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3013/.minikube/files for local assets ...
	I0108 13:25:00.213637   11017 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/ssl/certs/42012.pem -> 42012.pem in /etc/ssl/certs
	I0108 13:25:00.213816   11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 13:25:00.219994   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/ssl/certs/42012.pem --> /etc/ssl/certs/42012.pem (1708 bytes)
	I0108 13:25:00.238777   11017 start.go:303] post-start completed in 71.550985ms
	I0108 13:25:00.238794   11017 fix.go:57] fixHost completed within 740.163594ms
	I0108 13:25:00.238809   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:00.238949   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:00.239043   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:00.239154   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:00.239244   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:00.239373   11017 main.go:134] libmachine: Using SSH client type: native
	I0108 13:25:00.239484   11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.27 22 <nil> <nil>}
	I0108 13:25:00.239492   11017 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I0108 13:25:00.308816   11017 main.go:134] libmachine: SSH cmd err, output: <nil>: 1673213100.379604899
	
	I0108 13:25:00.308836   11017 fix.go:207] guest clock: 1673213100.379604899
	I0108 13:25:00.308845   11017 fix.go:220] Guest: 2023-01-08 13:25:00.379604899 -0800 PST Remote: 2023-01-08 13:25:00.238797 -0800 PST m=+1.144074135 (delta=140.807899ms)
	I0108 13:25:00.308870   11017 fix.go:191] guest clock delta is within tolerance: 140.807899ms
	I0108 13:25:00.308874   11017 start.go:83] releasing machines lock for "pause-132406", held for 810.289498ms
	I0108 13:25:00.308891   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:00.309018   11017 main.go:134] libmachine: (pause-132406) Calling .GetIP
	I0108 13:25:00.309095   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:00.309441   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:00.309566   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:00.309668   11017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 13:25:00.309712   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:00.309763   11017 ssh_runner.go:195] Run: cat /version.json
	I0108 13:25:00.309785   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:00.309831   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:00.309903   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:00.309978   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:00.310053   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:00.310076   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:00.310168   11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
	I0108 13:25:00.310203   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:00.310364   11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
	I0108 13:25:00.348771   11017 ssh_runner.go:195] Run: systemctl --version
	I0108 13:25:00.388455   11017 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:25:00.388568   11017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:25:00.405608   11017 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 13:25:00.405629   11017 docker.go:543] Images already preloaded, skipping extraction
	I0108 13:25:00.405741   11017 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 13:25:00.416067   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 13:25:00.427519   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 13:25:00.437606   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 13:25:00.455265   11017 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 13:25:00.602128   11017 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 13:25:00.735308   11017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:25:00.866261   11017 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 13:25:17.774785   11017 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.90843721s)
	I0108 13:25:17.774853   11017 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 13:25:17.875458   11017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:25:17.976112   11017 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0108 13:25:17.984898   11017 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 13:25:17.984974   11017 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 13:25:17.988660   11017 start.go:472] Will wait 60s for crictl version
	I0108 13:25:17.988708   11017 ssh_runner.go:195] Run: sudo crictl version
	I0108 13:25:18.011758   11017 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0108 13:25:18.011842   11017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:25:18.031040   11017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:25:18.095550   11017 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0108 13:25:18.095681   11017 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0108 13:25:18.098401   11017 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:25:18.098471   11017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:25:18.114724   11017 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 13:25:18.114737   11017 docker.go:543] Images already preloaded, skipping extraction
	I0108 13:25:18.114829   11017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:25:18.130914   11017 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 13:25:18.130933   11017 cache_images.go:84] Images are preloaded, skipping loading
	I0108 13:25:18.131029   11017 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 13:25:18.151682   11017 cni.go:95] Creating CNI manager for ""
	I0108 13:25:18.151699   11017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:25:18.151718   11017 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 13:25:18.151734   11017 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.27 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-132406 NodeName:pause-132406 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.27 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 13:25:18.151826   11017 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.27
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-132406"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 13:25:18.151903   11017 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-132406 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.27 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:pause-132406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 13:25:18.151975   11017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 13:25:18.157772   11017 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 13:25:18.157827   11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 13:25:18.163442   11017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I0108 13:25:18.174575   11017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 13:25:18.185567   11017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2037 bytes)
	I0108 13:25:18.196553   11017 ssh_runner.go:195] Run: grep 192.168.64.27	control-plane.minikube.internal$ /etc/hosts
	I0108 13:25:18.198959   11017 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406 for IP: 192.168.64.27
	I0108 13:25:18.199061   11017 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.key
	I0108 13:25:18.199112   11017 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3013/.minikube/proxy-client-ca.key
	I0108 13:25:18.199198   11017 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key
	I0108 13:25:18.199262   11017 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/apiserver.key.e04425f9
	I0108 13:25:18.199314   11017 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/proxy-client.key
	I0108 13:25:18.199539   11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/4201.pem (1338 bytes)
	W0108 13:25:18.199577   11017 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/4201_empty.pem, impossibly tiny 0 bytes
	I0108 13:25:18.199589   11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca-key.pem (1679 bytes)
	I0108 13:25:18.199629   11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem (1082 bytes)
	I0108 13:25:18.199665   11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/cert.pem (1123 bytes)
	I0108 13:25:18.199700   11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/key.pem (1675 bytes)
	I0108 13:25:18.199772   11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/ssl/certs/42012.pem (1708 bytes)
	I0108 13:25:18.200282   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 13:25:18.216490   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 13:25:18.232771   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 13:25:18.248769   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 13:25:18.264640   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 13:25:18.281114   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 13:25:18.297547   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 13:25:18.313498   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 13:25:18.329567   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/ssl/certs/42012.pem --> /usr/share/ca-certificates/42012.pem (1708 bytes)
	I0108 13:25:18.346166   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 13:25:18.362324   11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/4201.pem --> /usr/share/ca-certificates/4201.pem (1338 bytes)
	I0108 13:25:18.378266   11017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 13:25:18.389409   11017 ssh_runner.go:195] Run: openssl version
	I0108 13:25:18.392867   11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 13:25:18.399372   11017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:25:18.402423   11017 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:28 /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:25:18.402468   11017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:25:18.405927   11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 13:25:18.411551   11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4201.pem && ln -fs /usr/share/ca-certificates/4201.pem /etc/ssl/certs/4201.pem"
	I0108 13:25:18.418019   11017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4201.pem
	I0108 13:25:18.420869   11017 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4201.pem
	I0108 13:25:18.420912   11017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4201.pem
	I0108 13:25:18.424373   11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4201.pem /etc/ssl/certs/51391683.0"
	I0108 13:25:18.429820   11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42012.pem && ln -fs /usr/share/ca-certificates/42012.pem /etc/ssl/certs/42012.pem"
	I0108 13:25:18.436377   11017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42012.pem
	I0108 13:25:18.439330   11017 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/42012.pem
	I0108 13:25:18.439378   11017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42012.pem
	I0108 13:25:18.442990   11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42012.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 13:25:18.448565   11017 kubeadm.go:396] StartCluster: {Name:pause-132406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-132406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:25:18.448668   11017 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:25:18.464154   11017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 13:25:18.470018   11017 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 13:25:18.470031   11017 kubeadm.go:627] restartCluster start
	I0108 13:25:18.470077   11017 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 13:25:18.475709   11017 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:18.476155   11017 kubeconfig.go:92] found "pause-132406" server: "https://192.168.64.27:8443"
	I0108 13:25:18.476799   11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 13:25:18.477292   11017 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 13:25:18.482587   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:18.482702   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:18.490037   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:18.691070   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:18.691265   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:18.700943   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:18.891391   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:18.891526   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:18.900676   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:19.090747   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:19.090861   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:19.100097   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:19.290255   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:19.290417   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:19.299741   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:19.491515   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:19.491687   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:19.501195   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:19.690187   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:19.690275   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:19.698750   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:19.890980   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:19.891134   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:19.900481   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:20.092180   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:20.092380   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:20.101621   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:20.292174   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:20.292382   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:20.301493   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:20.490195   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:20.490362   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:20.499831   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:20.690991   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:20.691080   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:20.707843   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:20.890131   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:20.890220   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:20.938154   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:21.090723   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:21.090847   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:21.106274   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:21.290401   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:21.290470   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:21.312786   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:21.490334   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:21.490415   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:21.506185   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:21.506197   11017 api_server.go:165] Checking apiserver status ...
	I0108 13:25:21.506257   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:25:21.527352   11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:21.527365   11017 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 13:25:21.527375   11017 kubeadm.go:1114] stopping kube-system containers ...
	I0108 13:25:21.527449   11017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:25:21.586435   11017 docker.go:444] Stopping containers: [b3ea39090c67 a59e122b43f1 82b65485dbb4 d4f72481538e b17d288e92ab b5535145a6cf 80b9970570ee d0c6f1675c8d f879ee821d6d 592c899764e8 9dd26d98b44d 2fe730d9855f 8bc92a9a48c4 ae84ec1a64cd 5b4f6217121e 77e4c35247cc bded6cef9bbf c2ddc4b3adc5 0a13f0225a4c fb47cc5b476c 11b52ad80c15 9572a2db0191 fa94638cc4fa bf33cb6c0c18 40e688e290e3 685db2b6dfc6 2a8b711bcdda]
	I0108 13:25:21.586530   11017 ssh_runner.go:195] Run: docker stop b3ea39090c67 a59e122b43f1 82b65485dbb4 d4f72481538e b17d288e92ab b5535145a6cf 80b9970570ee d0c6f1675c8d f879ee821d6d 592c899764e8 9dd26d98b44d 2fe730d9855f 8bc92a9a48c4 ae84ec1a64cd 5b4f6217121e 77e4c35247cc bded6cef9bbf c2ddc4b3adc5 0a13f0225a4c fb47cc5b476c 11b52ad80c15 9572a2db0191 fa94638cc4fa bf33cb6c0c18 40e688e290e3 685db2b6dfc6 2a8b711bcdda
	I0108 13:25:22.415742   11017 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 13:25:22.469898   11017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:25:22.477303   11017 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Jan  8 21:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jan  8 21:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jan  8 21:24 /etc/kubernetes/scheduler.conf
	
	I0108 13:25:22.477369   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 13:25:22.497837   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 13:25:22.510243   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 13:25:22.519649   11017 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:22.519713   11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 13:25:22.529569   11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 13:25:22.547255   11017 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:25:22.547314   11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 13:25:22.556086   11017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 13:25:22.562336   11017 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 13:25:22.562348   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:25:22.629479   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:25:23.328667   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:25:23.475730   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:25:23.517443   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:25:23.562971   11017 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:25:23.563040   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:25:24.078015   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:25:24.576987   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:25:25.077412   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:25:25.089843   11017 api_server.go:71] duration metric: took 1.526868359s to wait for apiserver process to appear ...
	I0108 13:25:25.089860   11017 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:25:25.089870   11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
	I0108 13:25:28.649822   11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 13:25:28.649840   11017 api_server.go:102] status: https://192.168.64.27:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 13:25:29.150741   11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
	I0108 13:25:29.171836   11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 13:25:29.171851   11017 api_server.go:102] status: https://192.168.64.27:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:25:29.651373   11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
	I0108 13:25:29.656482   11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 13:25:29.656499   11017 api_server.go:102] status: https://192.168.64.27:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:25:30.150465   11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
	I0108 13:25:30.154853   11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 200:
	ok
	I0108 13:25:30.160641   11017 api_server.go:140] control plane version: v1.25.3
	I0108 13:25:30.160657   11017 api_server.go:130] duration metric: took 5.070772061s to wait for apiserver health ...
	I0108 13:25:30.160672   11017 cni.go:95] Creating CNI manager for ""
	I0108 13:25:30.160687   11017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:25:30.160698   11017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:25:30.167315   11017 system_pods.go:59] 6 kube-system pods found
	I0108 13:25:30.167330   11017 system_pods.go:61] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
	I0108 13:25:30.167336   11017 system_pods.go:61] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 13:25:30.167341   11017 system_pods.go:61] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 13:25:30.167347   11017 system_pods.go:61] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 13:25:30.167352   11017 system_pods.go:61] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 13:25:30.167357   11017 system_pods.go:61] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 13:25:30.167361   11017 system_pods.go:74] duration metric: took 6.65801ms to wait for pod list to return data ...
	I0108 13:25:30.167367   11017 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:25:30.173959   11017 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 13:25:30.173980   11017 node_conditions.go:123] node cpu capacity is 2
	I0108 13:25:30.173991   11017 node_conditions.go:105] duration metric: took 6.621227ms to run NodePressure ...
	I0108 13:25:30.174004   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:25:30.356109   11017 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 13:25:30.359685   11017 kubeadm.go:778] kubelet initialised
	I0108 13:25:30.359697   11017 kubeadm.go:779] duration metric: took 3.573087ms waiting for restarted kubelet to initialise ...
	I0108 13:25:30.359703   11017 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:25:30.363076   11017 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:30.369851   11017 pod_ready.go:92] pod "coredns-565d847f94-t2bdb" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:30.369861   11017 pod_ready.go:81] duration metric: took 6.774365ms waiting for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:30.369869   11017 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:32.380838   11017 pod_ready.go:102] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"False"
	I0108 13:25:34.381340   11017 pod_ready.go:102] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"False"
	I0108 13:25:36.881913   11017 pod_ready.go:102] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"False"
	I0108 13:25:38.882255   11017 pod_ready.go:92] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:38.882271   11017 pod_ready.go:81] duration metric: took 8.51236508s waiting for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:38.882277   11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:40.902617   11017 pod_ready.go:102] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"False"
	I0108 13:25:43.390748   11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:43.390761   11017 pod_ready.go:81] duration metric: took 4.508462855s waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.390767   11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.400557   11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:43.400568   11017 pod_ready.go:81] duration metric: took 9.796554ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.400574   11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.403156   11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:43.403166   11017 pod_ready.go:81] duration metric: took 2.587107ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.403174   11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.411451   11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:44.411465   11017 pod_ready.go:81] duration metric: took 1.008282022s waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.411472   11017 pod_ready.go:38] duration metric: took 14.051708866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:25:44.411481   11017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 13:25:44.418863   11017 ops.go:34] apiserver oom_adj: -16
	I0108 13:25:44.418873   11017 kubeadm.go:631] restartCluster took 25.948745972s
	I0108 13:25:44.418878   11017 kubeadm.go:398] StartCluster complete in 25.970227663s
	I0108 13:25:44.418886   11017 settings.go:142] acquiring lock: {Name:mk8df047e431900506a7782529ec776808797932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:25:44.418977   11017 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-3013/kubeconfig
	I0108 13:25:44.419424   11017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/kubeconfig: {Name:mk12e69a052d3b808fcdcd72ad62f9045d7b154d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:25:44.419963   11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 13:25:44.421604   11017 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-132406" rescaled to 1
	I0108 13:25:44.421632   11017 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 13:25:44.421642   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 13:25:44.421664   11017 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0108 13:25:44.464718   11017 out.go:177] * Verifying Kubernetes components...
	I0108 13:25:44.464758   11017 addons.go:65] Setting storage-provisioner=true in profile "pause-132406"
	I0108 13:25:44.485523   11017 addons.go:227] Setting addon storage-provisioner=true in "pause-132406"
	I0108 13:25:44.464765   11017 addons.go:65] Setting default-storageclass=true in profile "pause-132406"
	I0108 13:25:44.421794   11017 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:25:44.475539   11017 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	W0108 13:25:44.485550   11017 addons.go:236] addon storage-provisioner should already be in state true
	I0108 13:25:44.485556   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:25:44.485552   11017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-132406"
	I0108 13:25:44.485605   11017 host.go:66] Checking if "pause-132406" exists ...
	I0108 13:25:44.485877   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.485890   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.485895   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.485909   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.493358   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52925
	I0108 13:25:44.493704   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52927
	I0108 13:25:44.493766   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.494093   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.494097   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.494108   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.494325   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.494420   11017 main.go:134] libmachine: (pause-132406) Calling .GetState
	I0108 13:25:44.494437   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.494450   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.494517   11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.494607   11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
	I0108 13:25:44.494633   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.495008   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.495031   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.496752   11017 node_ready.go:35] waiting up to 6m0s for node "pause-132406" to be "Ready" ...
	I0108 13:25:44.497335   11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 13:25:44.498930   11017 node_ready.go:49] node "pause-132406" has status "Ready":"True"
	I0108 13:25:44.498941   11017 node_ready.go:38] duration metric: took 2.059705ms waiting for node "pause-132406" to be "Ready" ...
	I0108 13:25:44.498947   11017 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:25:44.499578   11017 addons.go:227] Setting addon default-storageclass=true in "pause-132406"
	W0108 13:25:44.499589   11017 addons.go:236] addon default-storageclass should already be in state true
	I0108 13:25:44.499606   11017 host.go:66] Checking if "pause-132406" exists ...
	I0108 13:25:44.499869   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.499888   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.502432   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52929
	I0108 13:25:44.502793   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.503162   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.503182   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.503337   11017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.503425   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.503542   11017 main.go:134] libmachine: (pause-132406) Calling .GetState
	I0108 13:25:44.503638   11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.503741   11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
	I0108 13:25:44.505184   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:44.526650   11017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 13:25:44.507300   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52931
	I0108 13:25:44.527054   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.547766   11017 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:25:44.547777   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 13:25:44.547790   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:44.547909   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:44.548050   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.548063   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.548095   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:44.548190   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:44.548274   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.548290   11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
	I0108 13:25:44.548649   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.548675   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.555825   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52934
	I0108 13:25:44.556201   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.556573   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.556585   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.556785   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.556890   11017 main.go:134] libmachine: (pause-132406) Calling .GetState
	I0108 13:25:44.556978   11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.557074   11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
	I0108 13:25:44.558022   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:44.558187   11017 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 13:25:44.558196   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 13:25:44.558205   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:44.558288   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:44.558385   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:44.558470   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:44.558547   11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
	I0108 13:25:44.587109   11017 pod_ready.go:92] pod "coredns-565d847f94-t2bdb" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:44.587119   11017 pod_ready.go:81] duration metric: took 83.771886ms waiting for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.587128   11017 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.599174   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:25:44.609018   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 13:25:44.988772   11017 pod_ready.go:92] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:44.988783   11017 pod_ready.go:81] duration metric: took 401.647771ms waiting for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.988791   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.186660   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.186678   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.186841   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.186866   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.186869   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.186878   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.186883   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.186898   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.186912   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.187089   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187104   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.187114   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.187131   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.187115   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187146   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.187125   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.187159   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.187192   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.187230   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.187349   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187402   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.187401   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.187428   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187436   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.245840   11017 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 13:25:45.282957   11017 addons.go:488] enableAddons completed in 861.279533ms
	I0108 13:25:45.388516   11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:45.388528   11017 pod_ready.go:81] duration metric: took 399.731294ms waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.388537   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.787890   11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:45.787901   11017 pod_ready.go:81] duration metric: took 399.340179ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.787908   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.187439   11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:46.187453   11017 pod_ready.go:81] duration metric: took 399.536729ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.187459   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.588219   11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:46.588232   11017 pod_ready.go:81] duration metric: took 400.763589ms waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.588239   11017 pod_ready.go:38] duration metric: took 2.0892776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:25:46.588288   11017 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:25:46.588361   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:25:46.598144   11017 api_server.go:71] duration metric: took 2.176485692s to wait for apiserver process to appear ...
	I0108 13:25:46.598158   11017 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:25:46.598165   11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
	I0108 13:25:46.602085   11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 200:
	ok
	I0108 13:25:46.602639   11017 api_server.go:140] control plane version: v1.25.3
	I0108 13:25:46.602648   11017 api_server.go:130] duration metric: took 4.486281ms to wait for apiserver health ...
	I0108 13:25:46.602654   11017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:25:46.791503   11017 system_pods.go:59] 7 kube-system pods found
	I0108 13:25:46.791521   11017 system_pods.go:61] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
	I0108 13:25:46.791526   11017 system_pods.go:61] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
	I0108 13:25:46.791529   11017 system_pods.go:61] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
	I0108 13:25:46.791533   11017 system_pods.go:61] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
	I0108 13:25:46.791538   11017 system_pods.go:61] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
	I0108 13:25:46.791542   11017 system_pods.go:61] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
	I0108 13:25:46.791550   11017 system_pods.go:61] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 13:25:46.791556   11017 system_pods.go:74] duration metric: took 188.896938ms to wait for pod list to return data ...
	I0108 13:25:46.791561   11017 default_sa.go:34] waiting for default service account to be created ...
	I0108 13:25:46.988179   11017 default_sa.go:45] found service account: "default"
	I0108 13:25:46.988192   11017 default_sa.go:55] duration metric: took 196.618556ms for default service account to be created ...
	I0108 13:25:46.988197   11017 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 13:25:47.191037   11017 system_pods.go:86] 7 kube-system pods found
	I0108 13:25:47.191051   11017 system_pods.go:89] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
	I0108 13:25:47.191056   11017 system_pods.go:89] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
	I0108 13:25:47.191059   11017 system_pods.go:89] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
	I0108 13:25:47.191062   11017 system_pods.go:89] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
	I0108 13:25:47.191068   11017 system_pods.go:89] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
	I0108 13:25:47.191071   11017 system_pods.go:89] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
	I0108 13:25:47.191075   11017 system_pods.go:89] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Running
	I0108 13:25:47.191079   11017 system_pods.go:126] duration metric: took 202.877582ms to wait for k8s-apps to be running ...
	I0108 13:25:47.191083   11017 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 13:25:47.191143   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:25:47.200849   11017 system_svc.go:56] duration metric: took 9.761745ms WaitForService to wait for kubelet.
	I0108 13:25:47.200862   11017 kubeadm.go:573] duration metric: took 2.779206372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 13:25:47.200873   11017 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:25:47.388983   11017 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 13:25:47.388998   11017 node_conditions.go:123] node cpu capacity is 2
	I0108 13:25:47.389006   11017 node_conditions.go:105] duration metric: took 188.128513ms to run NodePressure ...
	I0108 13:25:47.389012   11017 start.go:217] waiting for startup goroutines ...
	I0108 13:25:47.389347   11017 ssh_runner.go:195] Run: rm -f paused
	I0108 13:25:47.433718   11017 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0108 13:25:47.476634   11017 out.go:177] * Done! kubectl is now configured to use "pause-132406" cluster and "default" namespace by default
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-132406 -n pause-132406
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-132406 logs -n 25
E0108 13:25:50.418521    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-132406 logs -n 25: (2.859341467s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p force-systemd-flag-131733          | force-systemd-flag-131733 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	| start   | -p cert-expiration-131814             | cert-expiration-131814    | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | docker-flags-131736 ssh               | docker-flags-131736       | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-131736 ssh               | docker-flags-131736       | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-131736                | docker-flags-131736       | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	| start   | -p cert-options-131823                | cert-options-131823       | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:19 PST |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | cert-options-131823 ssh               | cert-options-131823       | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-131823 -- sudo        | cert-options-131823       | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-131823                | cert-options-131823       | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
	| start   | -p running-upgrade-131911             | running-upgrade-131911    | jenkins | v1.28.0 | 08 Jan 23 13:20 PST | 08 Jan 23 13:21 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-131911             | running-upgrade-131911    | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:21 PST |
	| start   | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:22 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p cert-expiration-131814             | cert-expiration-131814    | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:22 PST |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-131814             | cert-expiration-131814    | jenkins | v1.28.0 | 08 Jan 23 13:22 PST | 08 Jan 23 13:22 PST |
	| stop    | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:22 PST | 08 Jan 23 13:23 PST |
	| start   | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST | 08 Jan 23 13:23 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST | 08 Jan 23 13:24 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:24 PST |
	| start   | -p pause-132406 --memory=2048         | pause-132406              | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:24 PST |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit          |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-132230             | stopped-upgrade-132230    | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:25 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p pause-132406                       | pause-132406              | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:25 PST |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-132230             | stopped-upgrade-132230    | jenkins | v1.28.0 | 08 Jan 23 13:25 PST | 08 Jan 23 13:25 PST |
	| start   | -p NoKubernetes-132541                | NoKubernetes-132541       | jenkins | v1.28.0 | 08 Jan 23 13:25 PST |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-132541                | NoKubernetes-132541       | jenkins | v1.28.0 | 08 Jan 23 13:25 PST |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 13:25:41
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 13:25:41.864570   11086 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:25:41.864753   11086 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:25:41.864756   11086 out.go:309] Setting ErrFile to fd 2...
	I0108 13:25:41.864759   11086 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:25:41.864887   11086 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
	I0108 13:25:41.865398   11086 out.go:303] Setting JSON to false
	I0108 13:25:41.884483   11086 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5115,"bootTime":1673208026,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0108 13:25:41.884582   11086 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 13:25:41.906843   11086 out.go:177] * [NoKubernetes-132541] minikube v1.28.0 on Darwin 13.0.1
	I0108 13:25:41.948550   11086 notify.go:220] Checking for updates...
	I0108 13:25:41.970851   11086 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 13:25:41.992551   11086 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	I0108 13:25:42.013641   11086 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 13:25:42.034777   11086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 13:25:42.056742   11086 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	I0108 13:25:42.079417   11086 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:25:42.079463   11086 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 13:25:42.107801   11086 out.go:177] * Using the hyperkit driver based on user configuration
	I0108 13:25:42.149620   11086 start.go:294] selected driver: hyperkit
	I0108 13:25:42.149635   11086 start.go:838] validating driver "hyperkit" against <nil>
	I0108 13:25:42.149659   11086 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 13:25:42.149782   11086 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:25:42.150003   11086 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3013/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0108 13:25:42.158308   11086 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0108 13:25:42.161836   11086 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:42.161854   11086 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0108 13:25:42.161898   11086 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 13:25:42.164327   11086 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0108 13:25:42.164430   11086 start_flags.go:892] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 13:25:42.164452   11086 cni.go:95] Creating CNI manager for ""
	I0108 13:25:42.164459   11086 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:25:42.164468   11086 start_flags.go:317] config:
	{Name:NoKubernetes-132541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-132541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:25:42.164581   11086 iso.go:125] acquiring lock: {Name:mk509bccdb22b8c95ebe7c0f784c1151265efda4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:25:42.222410   11086 out.go:177] * Starting control plane node NoKubernetes-132541 in cluster NoKubernetes-132541
	I0108 13:25:42.259872   11086 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:25:42.260043   11086 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 13:25:42.260082   11086 cache.go:57] Caching tarball of preloaded images
	I0108 13:25:42.260294   11086 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 13:25:42.260312   11086 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 13:25:42.260462   11086 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/NoKubernetes-132541/config.json ...
	I0108 13:25:42.260510   11086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/NoKubernetes-132541/config.json: {Name:mkb313010fa03f74b48c17380336d5ac233d014a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:25:42.261062   11086 cache.go:193] Successfully downloaded all kic artifacts
	I0108 13:25:42.261110   11086 start.go:364] acquiring machines lock for NoKubernetes-132541: {Name:mk29e5f49e96ee5817a491da62b8738aae3fb506 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 13:25:42.261283   11086 start.go:368] acquired machines lock for "NoKubernetes-132541" in 157.235µs
	I0108 13:25:42.261346   11086 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-132541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-132541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 13:25:42.261435   11086 start.go:125] createHost starting for "" (driver="hyperkit")
	I0108 13:25:40.902617   11017 pod_ready.go:102] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"False"
	I0108 13:25:43.390748   11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:43.390761   11017 pod_ready.go:81] duration metric: took 4.508462855s waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.390767   11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.400557   11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:43.400568   11017 pod_ready.go:81] duration metric: took 9.796554ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.400574   11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.403156   11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:43.403166   11017 pod_ready.go:81] duration metric: took 2.587107ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.403174   11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.411451   11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:44.411465   11017 pod_ready.go:81] duration metric: took 1.008282022s waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.411472   11017 pod_ready.go:38] duration metric: took 14.051708866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:25:44.411481   11017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 13:25:44.418863   11017 ops.go:34] apiserver oom_adj: -16
	I0108 13:25:44.418873   11017 kubeadm.go:631] restartCluster took 25.948745972s
	I0108 13:25:44.418878   11017 kubeadm.go:398] StartCluster complete in 25.970227663s
	I0108 13:25:44.418886   11017 settings.go:142] acquiring lock: {Name:mk8df047e431900506a7782529ec776808797932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:25:44.418977   11017 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-3013/kubeconfig
	I0108 13:25:44.419424   11017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/kubeconfig: {Name:mk12e69a052d3b808fcdcd72ad62f9045d7b154d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:25:44.419963   11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 13:25:44.421604   11017 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-132406" rescaled to 1
	I0108 13:25:44.421632   11017 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 13:25:44.421642   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 13:25:44.421664   11017 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0108 13:25:44.464718   11017 out.go:177] * Verifying Kubernetes components...
	I0108 13:25:44.464758   11017 addons.go:65] Setting storage-provisioner=true in profile "pause-132406"
	I0108 13:25:44.485523   11017 addons.go:227] Setting addon storage-provisioner=true in "pause-132406"
	I0108 13:25:44.464765   11017 addons.go:65] Setting default-storageclass=true in profile "pause-132406"
	I0108 13:25:44.421794   11017 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:25:44.475539   11017 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	W0108 13:25:44.485550   11017 addons.go:236] addon storage-provisioner should already be in state true
	I0108 13:25:44.485556   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:25:44.485552   11017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-132406"
	I0108 13:25:44.485605   11017 host.go:66] Checking if "pause-132406" exists ...
	I0108 13:25:44.485877   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.485890   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.485895   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.485909   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.493358   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52925
	I0108 13:25:44.493704   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52927
	I0108 13:25:44.493766   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.494093   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.494097   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.494108   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.494325   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.494420   11017 main.go:134] libmachine: (pause-132406) Calling .GetState
	I0108 13:25:44.494437   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.494450   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.494517   11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.494607   11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
	I0108 13:25:44.494633   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.495008   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.495031   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.496752   11017 node_ready.go:35] waiting up to 6m0s for node "pause-132406" to be "Ready" ...
	I0108 13:25:44.497335   11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 13:25:44.498930   11017 node_ready.go:49] node "pause-132406" has status "Ready":"True"
	I0108 13:25:44.498941   11017 node_ready.go:38] duration metric: took 2.059705ms waiting for node "pause-132406" to be "Ready" ...
	I0108 13:25:44.498947   11017 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:25:44.499578   11017 addons.go:227] Setting addon default-storageclass=true in "pause-132406"
	W0108 13:25:44.499589   11017 addons.go:236] addon default-storageclass should already be in state true
	I0108 13:25:44.499606   11017 host.go:66] Checking if "pause-132406" exists ...
	I0108 13:25:44.499869   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.499888   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.502432   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52929
	I0108 13:25:44.502793   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.503162   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.503182   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.503337   11017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.503425   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.503542   11017 main.go:134] libmachine: (pause-132406) Calling .GetState
	I0108 13:25:44.503638   11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.503741   11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
	I0108 13:25:44.505184   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:44.526650   11017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 13:25:44.507300   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52931
	I0108 13:25:44.527054   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.547766   11017 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:25:44.547777   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 13:25:44.547790   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:44.547909   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:44.548050   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.548063   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.548095   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:44.548190   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:44.548274   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.548290   11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
	I0108 13:25:44.548649   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.548675   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.555825   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52934
	I0108 13:25:44.556201   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.556573   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.556585   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.556785   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.556890   11017 main.go:134] libmachine: (pause-132406) Calling .GetState
	I0108 13:25:44.556978   11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.557074   11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
	I0108 13:25:44.558022   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:44.558187   11017 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 13:25:44.558196   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 13:25:44.558205   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:44.558288   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:44.558385   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:44.558470   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:44.558547   11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
	I0108 13:25:44.587109   11017 pod_ready.go:92] pod "coredns-565d847f94-t2bdb" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:44.587119   11017 pod_ready.go:81] duration metric: took 83.771886ms waiting for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.587128   11017 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.599174   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:25:44.609018   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 13:25:44.988772   11017 pod_ready.go:92] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:44.988783   11017 pod_ready.go:81] duration metric: took 401.647771ms waiting for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.988791   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.186660   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.186678   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.186841   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.186866   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.186869   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.186878   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.186883   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.186898   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.186912   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.187089   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187104   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.187114   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.187131   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.187115   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187146   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.187125   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.187159   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.187192   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.187230   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.187349   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187402   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.187401   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.187428   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187436   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.245840   11017 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 13:25:42.303634   11086 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0108 13:25:42.304124   11086 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:42.304196   11086 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:42.312348   11086 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52923
	I0108 13:25:42.312703   11086 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:42.313112   11086 main.go:134] libmachine: Using API Version  1
	I0108 13:25:42.313119   11086 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:42.313353   11086 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:42.313462   11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .GetMachineName
	I0108 13:25:42.313550   11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .DriverName
	I0108 13:25:42.313649   11086 start.go:159] libmachine.API.Create for "NoKubernetes-132541" (driver="hyperkit")
	I0108 13:25:42.313676   11086 client.go:168] LocalClient.Create starting
	I0108 13:25:42.313716   11086 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem
	I0108 13:25:42.313761   11086 main.go:134] libmachine: Decoding PEM data...
	I0108 13:25:42.313774   11086 main.go:134] libmachine: Parsing certificate...
	I0108 13:25:42.313838   11086 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/cert.pem
	I0108 13:25:42.313868   11086 main.go:134] libmachine: Decoding PEM data...
	I0108 13:25:42.313878   11086 main.go:134] libmachine: Parsing certificate...
	I0108 13:25:42.313894   11086 main.go:134] libmachine: Running pre-create checks...
	I0108 13:25:42.313905   11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .PreCreateCheck
	I0108 13:25:42.313976   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:42.314125   11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .GetConfigRaw
	I0108 13:25:42.314532   11086 main.go:134] libmachine: Creating machine...
	I0108 13:25:42.314538   11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .Create
	I0108 13:25:42.314603   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:42.314731   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.314597   11094 common.go:116] Making disk image using store path: /Users/jenkins/minikube-integration/15565-3013/.minikube
	I0108 13:25:42.314792   11086 main.go:134] libmachine: (NoKubernetes-132541) Downloading /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15565-3013/.minikube/cache/iso/amd64/minikube-v1.28.0-1673190013-15565-amd64.iso...
	I0108 13:25:42.460398   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.460335   11094 common.go:123] Creating ssh key: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/id_rsa...
	I0108 13:25:42.503141   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.503046   11094 common.go:129] Creating raw disk image: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk...
	I0108 13:25:42.503157   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Writing magic tar header
	I0108 13:25:42.503171   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Writing SSH key tar header
	I0108 13:25:42.503539   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.503489   11094 common.go:143] Fixing permissions on /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541 ...
	I0108 13:25:42.653730   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:42.653744   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid
	I0108 13:25:42.653784   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Using UUID 014f6508-8f9b-11ed-91e7-149d997fca88
	I0108 13:25:42.678089   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Generated MAC 4e:f0:b3:1f:f:2b
	I0108 13:25:42.678104   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541
	I0108 13:25:42.678131   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"014f6508-8f9b-11ed-91e7-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000250e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0108 13:25:42.678168   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"014f6508-8f9b-11ed-91e7-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000250e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0108 13:25:42.678217   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "014f6508-8f9b-11ed-91e7-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/tty,log=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541"}
	I0108 13:25:42.678253   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 014f6508-8f9b-11ed-91e7-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/tty,log=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/console-ring -f kexec,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541"
	I0108 13:25:42.678262   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0108 13:25:42.679546   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Pid is 11097
	I0108 13:25:42.679869   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 0
	I0108 13:25:42.679885   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:42.679938   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
	I0108 13:25:42.680931   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
	I0108 13:25:42.681019   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0108 13:25:42.681065   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
	I0108 13:25:42.681091   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
	I0108 13:25:42.681107   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
	I0108 13:25:42.681116   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
	I0108 13:25:42.681130   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
	I0108 13:25:42.681144   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
	I0108 13:25:42.681154   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
	I0108 13:25:42.681167   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
	I0108 13:25:42.681181   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
	I0108 13:25:42.681193   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
	I0108 13:25:42.681203   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
	I0108 13:25:42.681218   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
	I0108 13:25:42.681228   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
	I0108 13:25:42.681241   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
	I0108 13:25:42.681253   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
	I0108 13:25:42.681264   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
	I0108 13:25:42.681275   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
	I0108 13:25:42.681297   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
	I0108 13:25:42.681310   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
	I0108 13:25:42.681320   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
	I0108 13:25:42.681333   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
	I0108 13:25:42.681343   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
	I0108 13:25:42.681355   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
	I0108 13:25:42.681370   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
	I0108 13:25:42.681383   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
	I0108 13:25:42.681398   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
	I0108 13:25:42.686391   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0108 13:25:42.695558   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0108 13:25:42.696153   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0108 13:25:42.696174   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0108 13:25:42.696188   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0108 13:25:42.696199   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0108 13:25:43.257820   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0108 13:25:43.257832   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0108 13:25:43.362873   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0108 13:25:43.362883   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0108 13:25:43.362890   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0108 13:25:43.362899   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0108 13:25:43.363783   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0108 13:25:43.363789   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0108 13:25:44.682748   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 1
	I0108 13:25:44.682757   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.682837   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
	I0108 13:25:44.684384   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
	I0108 13:25:44.684451   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0108 13:25:44.684458   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
	I0108 13:25:44.684475   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
	I0108 13:25:44.684481   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
	I0108 13:25:44.684487   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
	I0108 13:25:44.684492   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
	I0108 13:25:44.684504   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
	I0108 13:25:44.684509   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
	I0108 13:25:44.684516   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
	I0108 13:25:44.684521   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
	I0108 13:25:44.684527   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
	I0108 13:25:44.684533   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
	I0108 13:25:44.684541   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
	I0108 13:25:44.684548   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
	I0108 13:25:44.684554   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
	I0108 13:25:44.684559   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
	I0108 13:25:44.684578   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
	I0108 13:25:44.684588   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
	I0108 13:25:44.684597   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
	I0108 13:25:44.684604   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
	I0108 13:25:44.684610   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
	I0108 13:25:44.684619   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
	I0108 13:25:44.684625   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
	I0108 13:25:44.684632   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
	I0108 13:25:44.684637   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
	I0108 13:25:44.684642   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
	I0108 13:25:44.684651   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
	I0108 13:25:46.686543   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 2
	I0108 13:25:46.686557   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:46.686628   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
	I0108 13:25:46.687410   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
	I0108 13:25:46.687458   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0108 13:25:46.687466   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
	I0108 13:25:46.687481   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
	I0108 13:25:46.687487   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
	I0108 13:25:46.687494   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
	I0108 13:25:46.687499   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
	I0108 13:25:46.687509   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
	I0108 13:25:46.687514   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
	I0108 13:25:46.687521   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
	I0108 13:25:46.687526   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
	I0108 13:25:46.687532   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
	I0108 13:25:46.687540   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
	I0108 13:25:46.687545   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
	I0108 13:25:46.687551   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
	I0108 13:25:46.687558   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
	I0108 13:25:46.687564   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
	I0108 13:25:46.687569   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
	I0108 13:25:46.687584   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
	I0108 13:25:46.687591   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
	I0108 13:25:46.687599   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
	I0108 13:25:46.687607   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
	I0108 13:25:46.687612   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
	I0108 13:25:46.687617   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
	I0108 13:25:46.687623   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
	I0108 13:25:46.687629   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
	I0108 13:25:46.687636   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
	I0108 13:25:46.687643   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
	I0108 13:25:45.282957   11017 addons.go:488] enableAddons completed in 861.279533ms
	I0108 13:25:45.388516   11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:45.388528   11017 pod_ready.go:81] duration metric: took 399.731294ms waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.388537   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.787890   11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:45.787901   11017 pod_ready.go:81] duration metric: took 399.340179ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.787908   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.187439   11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:46.187453   11017 pod_ready.go:81] duration metric: took 399.536729ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.187459   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.588219   11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:46.588232   11017 pod_ready.go:81] duration metric: took 400.763589ms waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.588239   11017 pod_ready.go:38] duration metric: took 2.0892776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:25:46.588288   11017 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:25:46.588361   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:25:46.598144   11017 api_server.go:71] duration metric: took 2.176485692s to wait for apiserver process to appear ...
	I0108 13:25:46.598158   11017 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:25:46.598165   11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
	I0108 13:25:46.602085   11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 200:
	ok
	I0108 13:25:46.602639   11017 api_server.go:140] control plane version: v1.25.3
	I0108 13:25:46.602648   11017 api_server.go:130] duration metric: took 4.486281ms to wait for apiserver health ...
	I0108 13:25:46.602654   11017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:25:46.791503   11017 system_pods.go:59] 7 kube-system pods found
	I0108 13:25:46.791521   11017 system_pods.go:61] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
	I0108 13:25:46.791526   11017 system_pods.go:61] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
	I0108 13:25:46.791529   11017 system_pods.go:61] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
	I0108 13:25:46.791533   11017 system_pods.go:61] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
	I0108 13:25:46.791538   11017 system_pods.go:61] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
	I0108 13:25:46.791542   11017 system_pods.go:61] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
	I0108 13:25:46.791550   11017 system_pods.go:61] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 13:25:46.791556   11017 system_pods.go:74] duration metric: took 188.896938ms to wait for pod list to return data ...
	I0108 13:25:46.791561   11017 default_sa.go:34] waiting for default service account to be created ...
	I0108 13:25:46.988179   11017 default_sa.go:45] found service account: "default"
	I0108 13:25:46.988192   11017 default_sa.go:55] duration metric: took 196.618556ms for default service account to be created ...
	I0108 13:25:46.988197   11017 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 13:25:47.191037   11017 system_pods.go:86] 7 kube-system pods found
	I0108 13:25:47.191051   11017 system_pods.go:89] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
	I0108 13:25:47.191056   11017 system_pods.go:89] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
	I0108 13:25:47.191059   11017 system_pods.go:89] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
	I0108 13:25:47.191062   11017 system_pods.go:89] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
	I0108 13:25:47.191068   11017 system_pods.go:89] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
	I0108 13:25:47.191071   11017 system_pods.go:89] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
	I0108 13:25:47.191075   11017 system_pods.go:89] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Running
	I0108 13:25:47.191079   11017 system_pods.go:126] duration metric: took 202.877582ms to wait for k8s-apps to be running ...
	I0108 13:25:47.191083   11017 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 13:25:47.191143   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:25:47.200849   11017 system_svc.go:56] duration metric: took 9.761745ms WaitForService to wait for kubelet.
	I0108 13:25:47.200862   11017 kubeadm.go:573] duration metric: took 2.779206372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 13:25:47.200873   11017 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:25:47.388983   11017 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 13:25:47.388998   11017 node_conditions.go:123] node cpu capacity is 2
	I0108 13:25:47.389006   11017 node_conditions.go:105] duration metric: took 188.128513ms to run NodePressure ...
	I0108 13:25:47.389012   11017 start.go:217] waiting for startup goroutines ...
	I0108 13:25:47.389347   11017 ssh_runner.go:195] Run: rm -f paused
	I0108 13:25:47.433718   11017 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0108 13:25:47.476634   11017 out.go:177] * Done! kubectl is now configured to use "pause-132406" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-01-08 21:24:13 UTC, ends at Sun 2023-01-08 21:25:48 UTC. --
	Jan 08 21:25:25 pause-132406 dockerd[3702]: time="2023-01-08T21:25:25.290659171Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3314497202fc8ceeb27ca7190e02eafa07a3b2174edff746b07ed7a18bb2797e pid=5489 runtime=io.containerd.runc.v2
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310282074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310632315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310689702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.311253391Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2c864c071578be13dc25e84f4d73ec21beecae7650ed31f40171521323b956bc pid=5652 runtime=io.containerd.runc.v2
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590607044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590804961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590861947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.591114210Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cfca5ca38f1bb40a1f783df11849538c078a7ea84cd1507a93401e6ac921043c pid=5701 runtime=io.containerd.runc.v2
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696304305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696340262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696348212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696786183Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a037098dc5d0363118aa47fc6662a0cb9803f357dbe7488d39ac54fbda264a85 pid=5742 runtime=io.containerd.runc.v2
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852485421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852650670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852752050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.853144561Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2fa6736cca283a0849a4133c4846ff785f1dabecc824ab55422b9fe1df5fb20e pid=5806 runtime=io.containerd.runc.v2
	Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693887010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693975959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693985352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.694546414Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7509d84ccc611055e0a390b6d4f9edf99f5625ea09b62d1eae87e614b0930aa8 pid=6088 runtime=io.containerd.runc.v2
	Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.042984174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043017322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043025311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043168759Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/47155f3f92e2edb1c9b9544dbec392073d4a267f0e2e171c4c0c8f41eed1b42d pid=6226 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	47155f3f92e2e       6e38f40d628db       2 seconds ago       Running             storage-provisioner       0                   7509d84ccc611
	2fa6736cca283       5185b96f0becf       18 seconds ago      Running             coredns                   2                   2c864c071578b
	a037098dc5d03       beaaf00edd38a       18 seconds ago      Running             kube-proxy                2                   cfca5ca38f1bb
	3314497202fc8       6d23ec0e8b87e       23 seconds ago      Running             kube-scheduler            3                   7225e68b6cdb9
	2702ef37e8c9f       a8a176a5d5d69       23 seconds ago      Running             etcd                      3                   d6054662c415b
	85b18341d5fa3       6039992312758       24 seconds ago      Running             kube-controller-manager   3                   167990773c8df
	e49c330971e33       0346dbd74bcb9       24 seconds ago      Running             kube-apiserver            3                   7719cf6e2ded6
	6c8e664a440de       6d23ec0e8b87e       26 seconds ago      Created             kube-scheduler            2                   b17d288e92aba
	5836a9370f77e       beaaf00edd38a       26 seconds ago      Created             kube-proxy                1                   d0c6f1675c8df
	bf3a9fcdde4ed       5185b96f0becf       26 seconds ago      Created             coredns                   1                   80b9970570ee9
	359f540cb31f6       6039992312758       27 seconds ago      Created             kube-controller-manager   2                   82b65485dbb4d
	b3ea39090c67a       a8a176a5d5d69       27 seconds ago      Exited              etcd                      2                   d4f72481538e7
	a59e122b43f1f       0346dbd74bcb9       27 seconds ago      Exited              kube-apiserver            2                   b5535145a6cf3
	c2ddc4b3adc5e       5185b96f0becf       53 seconds ago      Exited              coredns                   0                   11b52ad80c153
	
	* 
	* ==> coredns [2fa6736cca28] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> coredns [bf3a9fcdde4e] <==
	* 
	* 
	* ==> coredns [c2ddc4b3adc5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-132406
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-132406
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=pause-132406
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T13_24_43_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:24:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-132406
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:25:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:24:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:24:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:24:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:25:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.27
	  Hostname:    pause-132406
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017572Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017572Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ba663e0089540a7aff02be8cb7e7914
	  System UUID:                c84e11ed-0000-0000-a16b-149d997fca88
	  Boot ID:                    e1c358fb-4be5-406e-aa57-71fbfb8be72e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-t2bdb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     54s
	  kube-system                 etcd-pause-132406                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         64s
	  kube-system                 kube-apiserver-pause-132406             250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-pause-132406    200m (10%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-c2zj2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-pause-132406             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     65s                kubelet          Node pause-132406 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  65s                kubelet          Node pause-132406 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s                kubelet          Node pause-132406 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                65s                kubelet          Node pause-132406 status is now: NodeReady
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           54s                node-controller  Node pause-132406 event: Registered Node pause-132406 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-132406 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-132406 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-132406 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-132406 event: Registered Node pause-132406 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.891901] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.787838] systemd-fstab-generator[528]: Ignoring "noauto" for root device
	[  +0.090038] systemd-fstab-generator[539]: Ignoring "noauto" for root device
	[  +5.154737] systemd-fstab-generator[759]: Ignoring "noauto" for root device
	[  +1.211845] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.212584] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.092038] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.088864] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +1.451512] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[  +0.096327] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
	[  +3.011005] systemd-fstab-generator[1323]: Ignoring "noauto" for root device
	[  +0.609217] kauditd_printk_skb: 68 callbacks suppressed
	[ +14.122766] systemd-fstab-generator[1992]: Ignoring "noauto" for root device
	[ +11.883875] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.253764] systemd-fstab-generator[2883]: Ignoring "noauto" for root device
	[  +0.141331] systemd-fstab-generator[2894]: Ignoring "noauto" for root device
	[Jan 8 21:25] systemd-fstab-generator[2905]: Ignoring "noauto" for root device
	[  +0.401098] kauditd_printk_skb: 18 callbacks suppressed
	[ +16.643218] systemd-fstab-generator[4108]: Ignoring "noauto" for root device
	[  +0.107408] systemd-fstab-generator[4162]: Ignoring "noauto" for root device
	[  +5.496654] systemd-fstab-generator[5099]: Ignoring "noauto" for root device
	[  +6.803519] kauditd_printk_skb: 31 callbacks suppressed
	
	* 
	* ==> etcd [2702ef37e8c9] <==
	* {"level":"info","ts":"2023-01-08T21:25:25.967Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d9a8ee5ed7997f86","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-08T21:25:25.967Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-01-08T21:25:25.968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=(15684047793429249926)"}
	{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","added-peer-id":"d9a8ee5ed7997f86","added-peer-peer-urls":["https://192.168.64.27:2380"]}
	{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:25:25.971Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:25:25.971Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.27:2380"}
	{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.27:2380"}
	{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d9a8ee5ed7997f86","initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 received MsgPreVoteResp from d9a8ee5ed7997f86 at term 3"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became candidate at term 4"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 received MsgVoteResp from d9a8ee5ed7997f86 at term 4"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became leader at term 4"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d9a8ee5ed7997f86 elected leader d9a8ee5ed7997f86 at term 4"}
	{"level":"info","ts":"2023-01-08T21:25:26.967Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d9a8ee5ed7997f86","local-member-attributes":"{Name:pause-132406 ClientURLs:[https://192.168.64.27:2379]}","request-path":"/0/members/d9a8ee5ed7997f86/attributes","cluster-id":"d657f6537ff55566","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:25:26.967Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T21:25:26.970Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.27:2379"}
	
	* 
	* ==> etcd [b3ea39090c67] <==
	* {"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"]}
	{"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.4","git-sha":"08407ff76","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-132406","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2023-01-08T21:25:22.355Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"417.198µs"}
	{"level":"info","ts":"2023-01-08T21:25:22.363Z","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-01-08T21:25:22.365Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","commit-index":399}
	{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=()"}
	{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became follower at term 3"}
	{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d9a8ee5ed7997f86 [peers: [], term: 3, commit: 399, applied: 0, lastindex: 399, lastterm: 3]"}
	{"level":"warn","ts":"2023-01-08T21:25:22.366Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-01-08T21:25:22.367Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":382}
	{"level":"info","ts":"2023-01-08T21:25:22.368Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-01-08T21:25:22.368Z","caller":"etcdserver/corrupt.go:46","msg":"starting initial corruption check","local-member-id":"d9a8ee5ed7997f86","timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/corrupt.go:116","msg":"initial corruption checking passed; no corruption","local-member-id":"d9a8ee5ed7997f86"}
	{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d9a8ee5ed7997f86","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-01-08T21:25:22.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=(15684047793429249926)"}
	{"level":"info","ts":"2023-01-08T21:25:22.370Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","added-peer-id":"d9a8ee5ed7997f86","added-peer-peer-urls":["https://192.168.64.27:2380"]}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d9a8ee5ed7997f86","initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.27:2380"}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.27:2380"}
	
	* 
	* ==> kernel <==
	*  21:25:49 up 1 min,  0 users,  load average: 0.89, 0.30, 0.11
	Linux pause-132406 5.10.57 #1 SMP Sun Jan 8 19:17:02 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a59e122b43f1] <==
	* 
	* 
	* ==> kube-apiserver [e49c330971e3] <==
	* I0108 21:25:28.700297       1 controller.go:85] Starting OpenAPI V3 controller
	I0108 21:25:28.700429       1 naming_controller.go:291] Starting NamingConditionController
	I0108 21:25:28.700511       1 establishing_controller.go:76] Starting EstablishingController
	I0108 21:25:28.700559       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0108 21:25:28.701254       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0108 21:25:28.701371       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0108 21:25:28.701544       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0108 21:25:28.702007       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0108 21:25:28.782489       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 21:25:28.791936       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:25:28.792377       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0108 21:25:28.792956       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:25:28.793154       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 21:25:28.795220       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:25:28.800502       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 21:25:28.858432       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:25:29.472691       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:25:29.697351       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:25:30.382302       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0108 21:25:30.390603       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0108 21:25:30.412248       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0108 21:25:30.430489       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:25:30.435393       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:25:41.061315       1 controller.go:616] quota admission added evaluator for: endpoints
	I0108 21:25:41.169568       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [359f540cb31f] <==
	* 
	* 
	* ==> kube-controller-manager [85b18341d5fa] <==
	* I0108 21:25:41.096790       1 shared_informer.go:262] Caches are synced for expand
	I0108 21:25:41.099296       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0108 21:25:41.115597       1 shared_informer.go:262] Caches are synced for deployment
	I0108 21:25:41.119008       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0108 21:25:41.133526       1 shared_informer.go:262] Caches are synced for node
	I0108 21:25:41.133592       1 range_allocator.go:166] Starting range CIDR allocator
	I0108 21:25:41.133606       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0108 21:25:41.133651       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0108 21:25:41.142840       1 shared_informer.go:262] Caches are synced for daemon sets
	I0108 21:25:41.157497       1 shared_informer.go:262] Caches are synced for taint
	I0108 21:25:41.157759       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0108 21:25:41.157993       1 taint_manager.go:209] "Sending events to api server"
	I0108 21:25:41.157772       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W0108 21:25:41.158417       1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-132406. Assuming now as a timestamp.
	I0108 21:25:41.158643       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0108 21:25:41.158045       1 event.go:294] "Event occurred" object="pause-132406" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-132406 event: Registered Node pause-132406 in Controller"
	I0108 21:25:41.160214       1 shared_informer.go:262] Caches are synced for persistent volume
	I0108 21:25:41.161892       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0108 21:25:41.162048       1 shared_informer.go:262] Caches are synced for GC
	I0108 21:25:41.171471       1 shared_informer.go:262] Caches are synced for TTL
	I0108 21:25:41.197886       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:25:41.235793       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:25:41.610248       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:25:41.657226       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:25:41.657437       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [5836a9370f77] <==
	* 
	* 
	* ==> kube-proxy [a037098dc5d0] <==
	* I0108 21:25:30.850478       1 node.go:163] Successfully retrieved node IP: 192.168.64.27
	I0108 21:25:30.850523       1 server_others.go:138] "Detected node IP" address="192.168.64.27"
	I0108 21:25:30.850546       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:25:30.900885       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0108 21:25:30.901123       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:25:30.901146       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:25:30.902097       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:25:30.902216       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:25:30.902654       1 config.go:317] "Starting service config controller"
	I0108 21:25:30.902693       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:25:30.902720       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:25:30.902731       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:25:30.903108       1 config.go:444] "Starting node config controller"
	I0108 21:25:30.903471       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:25:31.002821       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0108 21:25:31.002956       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:25:31.003974       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3314497202fc] <==
	* I0108 21:25:26.516401       1 serving.go:348] Generated self-signed cert in-memory
	W0108 21:25:28.747634       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 21:25:28.747668       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:25:28.747676       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 21:25:28.747682       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 21:25:28.778528       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0108 21:25:28.778884       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:25:28.780681       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0108 21:25:28.780730       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 21:25:28.781271       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:25:28.780750       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:25:28.882423       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [6c8e664a440d] <==
	* 
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-01-08 21:24:13 UTC, ends at Sun 2023-01-08 21:25:50 UTC. --
	Jan 08 21:25:28 pause-132406 kubelet[5105]: E0108 21:25:28.600279    5105 kubelet.go:2448] "Error getting node" err="node \"pause-132406\" not found"
	Jan 08 21:25:28 pause-132406 kubelet[5105]: E0108 21:25:28.700806    5105 kubelet.go:2448] "Error getting node" err="node \"pause-132406\" not found"
	Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.801519    5105 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.802334    5105 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.818830    5105 kubelet_node_status.go:108] "Node was previously registered" node="pause-132406"
	Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.818970    5105 kubelet_node_status.go:73] "Successfully registered node" node="pause-132406"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.648000    5105 apiserver.go:52] "Watching apiserver"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.649885    5105 topology_manager.go:205] "Topology Admit Handler"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.649962    5105 topology_manager.go:205] "Topology Admit Handler"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.709981    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b1d4603-7531-4c5b-b5d1-17f4712c727e-config-volume\") pod \"coredns-565d847f94-t2bdb\" (UID: \"4b1d4603-7531-4c5b-b5d1-17f4712c727e\") " pod="kube-system/coredns-565d847f94-t2bdb"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710359    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4nq\" (UniqueName: \"kubernetes.io/projected/4b1d4603-7531-4c5b-b5d1-17f4712c727e-kube-api-access-bm4nq\") pod \"coredns-565d847f94-t2bdb\" (UID: \"4b1d4603-7531-4c5b-b5d1-17f4712c727e\") " pod="kube-system/coredns-565d847f94-t2bdb"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710457    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06f5a965-c191-491e-a8ca-81e45cdab1e0-kube-proxy\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710554    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06f5a965-c191-491e-a8ca-81e45cdab1e0-xtables-lock\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710604    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzq48\" (UniqueName: \"kubernetes.io/projected/06f5a965-c191-491e-a8ca-81e45cdab1e0-kube-api-access-lzq48\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710707    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06f5a965-c191-491e-a8ca-81e45cdab1e0-lib-modules\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710785    5105 reconciler.go:169] "Reconciler: start to sync state"
	Jan 08 21:25:30 pause-132406 kubelet[5105]: I0108 21:25:30.786116    5105 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2c864c071578be13dc25e84f4d73ec21beecae7650ed31f40171521323b956bc"
	Jan 08 21:25:32 pause-132406 kubelet[5105]: I0108 21:25:32.815079    5105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 08 21:25:38 pause-132406 kubelet[5105]: I0108 21:25:38.949973    5105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.283961    5105 topology_manager.go:205] "Topology Admit Handler"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: E0108 21:25:45.284028    5105 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="877d71f1-d869-4d8d-8534-9b676cc5beb0" containerName="coredns"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.284048    5105 memory_manager.go:345] "RemoveStaleState removing state" podUID="877d71f1-d869-4d8d-8534-9b676cc5beb0" containerName="coredns"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.379908    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a4d0a073-64e2-44d3-b701-67c31b2c9dcb-tmp\") pod \"storage-provisioner\" (UID: \"a4d0a073-64e2-44d3-b701-67c31b2c9dcb\") " pod="kube-system/storage-provisioner"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.380046    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dd8\" (UniqueName: \"kubernetes.io/projected/a4d0a073-64e2-44d3-b701-67c31b2c9dcb-kube-api-access-b5dd8\") pod \"storage-provisioner\" (UID: \"a4d0a073-64e2-44d3-b701-67c31b2c9dcb\") " pod="kube-system/storage-provisioner"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.964235    5105 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7509d84ccc611055e0a390b6d4f9edf99f5625ea09b62d1eae87e614b0930aa8"
	
	* 
	* ==> storage-provisioner [47155f3f92e2] <==
	* I0108 21:25:46.098217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:25:46.107255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:25:46.107432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:25:46.112096       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:25:46.112481       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5!
	I0108 21:25:46.113189       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4245f6bb-b0ff-44ce-bc47-687e46bad904", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5 became leader
	I0108 21:25:46.217361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-132406 -n pause-132406
helpers_test.go:261: (dbg) Run:  kubectl --context pause-132406 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-132406 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-132406 describe pod : exit status 1 (39.854254ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context pause-132406 describe pod : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-132406 -n pause-132406
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-132406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-132406 logs -n 25: (2.828323591s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p force-systemd-flag-131733          | force-systemd-flag-131733 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	| start   | -p cert-expiration-131814             | cert-expiration-131814    | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | docker-flags-131736 ssh               | docker-flags-131736       | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-131736 ssh               | docker-flags-131736       | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-131736                | docker-flags-131736       | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	| start   | -p cert-options-131823                | cert-options-131823       | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:19 PST |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | cert-options-131823 ssh               | cert-options-131823       | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-131823 -- sudo        | cert-options-131823       | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-131823                | cert-options-131823       | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
	| start   | -p running-upgrade-131911             | running-upgrade-131911    | jenkins | v1.28.0 | 08 Jan 23 13:20 PST | 08 Jan 23 13:21 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-131911             | running-upgrade-131911    | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:21 PST |
	| start   | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:22 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p cert-expiration-131814             | cert-expiration-131814    | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:22 PST |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-131814             | cert-expiration-131814    | jenkins | v1.28.0 | 08 Jan 23 13:22 PST | 08 Jan 23 13:22 PST |
	| stop    | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:22 PST | 08 Jan 23 13:23 PST |
	| start   | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST | 08 Jan 23 13:23 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST | 08 Jan 23 13:24 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132147          | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:24 PST |
	| start   | -p pause-132406 --memory=2048         | pause-132406              | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:24 PST |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit          |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-132230             | stopped-upgrade-132230    | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:25 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p pause-132406                       | pause-132406              | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:25 PST |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-132230             | stopped-upgrade-132230    | jenkins | v1.28.0 | 08 Jan 23 13:25 PST | 08 Jan 23 13:25 PST |
	| start   | -p NoKubernetes-132541                | NoKubernetes-132541       | jenkins | v1.28.0 | 08 Jan 23 13:25 PST |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-132541                | NoKubernetes-132541       | jenkins | v1.28.0 | 08 Jan 23 13:25 PST |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 13:25:41
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 13:25:41.864570   11086 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:25:41.864753   11086 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:25:41.864756   11086 out.go:309] Setting ErrFile to fd 2...
	I0108 13:25:41.864759   11086 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:25:41.864887   11086 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
	I0108 13:25:41.865398   11086 out.go:303] Setting JSON to false
	I0108 13:25:41.884483   11086 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5115,"bootTime":1673208026,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0108 13:25:41.884582   11086 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 13:25:41.906843   11086 out.go:177] * [NoKubernetes-132541] minikube v1.28.0 on Darwin 13.0.1
	I0108 13:25:41.948550   11086 notify.go:220] Checking for updates...
	I0108 13:25:41.970851   11086 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 13:25:41.992551   11086 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	I0108 13:25:42.013641   11086 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 13:25:42.034777   11086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 13:25:42.056742   11086 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	I0108 13:25:42.079417   11086 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:25:42.079463   11086 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 13:25:42.107801   11086 out.go:177] * Using the hyperkit driver based on user configuration
	I0108 13:25:42.149620   11086 start.go:294] selected driver: hyperkit
	I0108 13:25:42.149635   11086 start.go:838] validating driver "hyperkit" against <nil>
	I0108 13:25:42.149659   11086 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 13:25:42.149782   11086 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:25:42.150003   11086 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3013/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0108 13:25:42.158308   11086 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0108 13:25:42.161836   11086 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:42.161854   11086 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0108 13:25:42.161898   11086 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 13:25:42.164327   11086 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0108 13:25:42.164430   11086 start_flags.go:892] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 13:25:42.164452   11086 cni.go:95] Creating CNI manager for ""
	I0108 13:25:42.164459   11086 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:25:42.164468   11086 start_flags.go:317] config:
	{Name:NoKubernetes-132541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-132541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:25:42.164581   11086 iso.go:125] acquiring lock: {Name:mk509bccdb22b8c95ebe7c0f784c1151265efda4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:25:42.222410   11086 out.go:177] * Starting control plane node NoKubernetes-132541 in cluster NoKubernetes-132541
	I0108 13:25:42.259872   11086 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:25:42.260043   11086 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 13:25:42.260082   11086 cache.go:57] Caching tarball of preloaded images
	I0108 13:25:42.260294   11086 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 13:25:42.260312   11086 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 13:25:42.260462   11086 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/NoKubernetes-132541/config.json ...
	I0108 13:25:42.260510   11086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/NoKubernetes-132541/config.json: {Name:mkb313010fa03f74b48c17380336d5ac233d014a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:25:42.261062   11086 cache.go:193] Successfully downloaded all kic artifacts
	I0108 13:25:42.261110   11086 start.go:364] acquiring machines lock for NoKubernetes-132541: {Name:mk29e5f49e96ee5817a491da62b8738aae3fb506 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 13:25:42.261283   11086 start.go:368] acquired machines lock for "NoKubernetes-132541" in 157.235µs
	I0108 13:25:42.261346   11086 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-132541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-132541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 13:25:42.261435   11086 start.go:125] createHost starting for "" (driver="hyperkit")
	I0108 13:25:40.902617   11017 pod_ready.go:102] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"False"
	I0108 13:25:43.390748   11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:43.390761   11017 pod_ready.go:81] duration metric: took 4.508462855s waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.390767   11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.400557   11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:43.400568   11017 pod_ready.go:81] duration metric: took 9.796554ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.400574   11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.403156   11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:43.403166   11017 pod_ready.go:81] duration metric: took 2.587107ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:43.403174   11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.411451   11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:44.411465   11017 pod_ready.go:81] duration metric: took 1.008282022s waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.411472   11017 pod_ready.go:38] duration metric: took 14.051708866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:25:44.411481   11017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 13:25:44.418863   11017 ops.go:34] apiserver oom_adj: -16
	I0108 13:25:44.418873   11017 kubeadm.go:631] restartCluster took 25.948745972s
	I0108 13:25:44.418878   11017 kubeadm.go:398] StartCluster complete in 25.970227663s
	I0108 13:25:44.418886   11017 settings.go:142] acquiring lock: {Name:mk8df047e431900506a7782529ec776808797932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:25:44.418977   11017 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-3013/kubeconfig
	I0108 13:25:44.419424   11017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/kubeconfig: {Name:mk12e69a052d3b808fcdcd72ad62f9045d7b154d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:25:44.419963   11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 13:25:44.421604   11017 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-132406" rescaled to 1
	I0108 13:25:44.421632   11017 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 13:25:44.421642   11017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 13:25:44.421664   11017 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0108 13:25:44.464718   11017 out.go:177] * Verifying Kubernetes components...
	I0108 13:25:44.464758   11017 addons.go:65] Setting storage-provisioner=true in profile "pause-132406"
	I0108 13:25:44.485523   11017 addons.go:227] Setting addon storage-provisioner=true in "pause-132406"
	I0108 13:25:44.464765   11017 addons.go:65] Setting default-storageclass=true in profile "pause-132406"
	I0108 13:25:44.421794   11017 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:25:44.475539   11017 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	W0108 13:25:44.485550   11017 addons.go:236] addon storage-provisioner should already be in state true
	I0108 13:25:44.485556   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:25:44.485552   11017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-132406"
	I0108 13:25:44.485605   11017 host.go:66] Checking if "pause-132406" exists ...
	I0108 13:25:44.485877   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.485890   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.485895   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.485909   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.493358   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52925
	I0108 13:25:44.493704   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52927
	I0108 13:25:44.493766   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.494093   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.494097   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.494108   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.494325   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.494420   11017 main.go:134] libmachine: (pause-132406) Calling .GetState
	I0108 13:25:44.494437   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.494450   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.494517   11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.494607   11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
	I0108 13:25:44.494633   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.495008   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.495031   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.496752   11017 node_ready.go:35] waiting up to 6m0s for node "pause-132406" to be "Ready" ...
	I0108 13:25:44.497335   11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 13:25:44.498930   11017 node_ready.go:49] node "pause-132406" has status "Ready":"True"
	I0108 13:25:44.498941   11017 node_ready.go:38] duration metric: took 2.059705ms waiting for node "pause-132406" to be "Ready" ...
	I0108 13:25:44.498947   11017 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:25:44.499578   11017 addons.go:227] Setting addon default-storageclass=true in "pause-132406"
	W0108 13:25:44.499589   11017 addons.go:236] addon default-storageclass should already be in state true
	I0108 13:25:44.499606   11017 host.go:66] Checking if "pause-132406" exists ...
	I0108 13:25:44.499869   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.499888   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.502432   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52929
	I0108 13:25:44.502793   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.503162   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.503182   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.503337   11017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.503425   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.503542   11017 main.go:134] libmachine: (pause-132406) Calling .GetState
	I0108 13:25:44.503638   11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.503741   11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
	I0108 13:25:44.505184   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:44.526650   11017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 13:25:44.507300   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52931
	I0108 13:25:44.527054   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.547766   11017 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:25:44.547777   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 13:25:44.547790   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:44.547909   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:44.548050   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.548063   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.548095   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:44.548190   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:44.548274   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.548290   11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
	I0108 13:25:44.548649   11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:44.548675   11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:44.555825   11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52934
	I0108 13:25:44.556201   11017 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:44.556573   11017 main.go:134] libmachine: Using API Version  1
	I0108 13:25:44.556585   11017 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:44.556785   11017 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:44.556890   11017 main.go:134] libmachine: (pause-132406) Calling .GetState
	I0108 13:25:44.556978   11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.557074   11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
	I0108 13:25:44.558022   11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
	I0108 13:25:44.558187   11017 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 13:25:44.558196   11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 13:25:44.558205   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
	I0108 13:25:44.558288   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
	I0108 13:25:44.558385   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
	I0108 13:25:44.558470   11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
	I0108 13:25:44.558547   11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
	I0108 13:25:44.587109   11017 pod_ready.go:92] pod "coredns-565d847f94-t2bdb" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:44.587119   11017 pod_ready.go:81] duration metric: took 83.771886ms waiting for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.587128   11017 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.599174   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:25:44.609018   11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 13:25:44.988772   11017 pod_ready.go:92] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:44.988783   11017 pod_ready.go:81] duration metric: took 401.647771ms waiting for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:44.988791   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.186660   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.186678   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.186841   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.186866   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.186869   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.186878   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.186883   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.186898   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.186912   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.187089   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187104   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.187114   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.187131   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.187115   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187146   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.187125   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.187159   11017 main.go:134] libmachine: Making call to close driver server
	I0108 13:25:45.187192   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.187230   11017 main.go:134] libmachine: (pause-132406) Calling .Close
	I0108 13:25:45.187349   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187402   11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
	I0108 13:25:45.187401   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.187428   11017 main.go:134] libmachine: Successfully made call to close driver server
	I0108 13:25:45.187436   11017 main.go:134] libmachine: Making call to close connection to plugin binary
	I0108 13:25:45.245840   11017 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 13:25:42.303634   11086 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0108 13:25:42.304124   11086 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:25:42.304196   11086 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:25:42.312348   11086 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52923
	I0108 13:25:42.312703   11086 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:25:42.313112   11086 main.go:134] libmachine: Using API Version  1
	I0108 13:25:42.313119   11086 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:25:42.313353   11086 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:25:42.313462   11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .GetMachineName
	I0108 13:25:42.313550   11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .DriverName
	I0108 13:25:42.313649   11086 start.go:159] libmachine.API.Create for "NoKubernetes-132541" (driver="hyperkit")
	I0108 13:25:42.313676   11086 client.go:168] LocalClient.Create starting
	I0108 13:25:42.313716   11086 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem
	I0108 13:25:42.313761   11086 main.go:134] libmachine: Decoding PEM data...
	I0108 13:25:42.313774   11086 main.go:134] libmachine: Parsing certificate...
	I0108 13:25:42.313838   11086 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/cert.pem
	I0108 13:25:42.313868   11086 main.go:134] libmachine: Decoding PEM data...
	I0108 13:25:42.313878   11086 main.go:134] libmachine: Parsing certificate...
	I0108 13:25:42.313894   11086 main.go:134] libmachine: Running pre-create checks...
	I0108 13:25:42.313905   11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .PreCreateCheck
	I0108 13:25:42.313976   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:42.314125   11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .GetConfigRaw
	I0108 13:25:42.314532   11086 main.go:134] libmachine: Creating machine...
	I0108 13:25:42.314538   11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .Create
	I0108 13:25:42.314603   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:42.314731   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.314597   11094 common.go:116] Making disk image using store path: /Users/jenkins/minikube-integration/15565-3013/.minikube
	I0108 13:25:42.314792   11086 main.go:134] libmachine: (NoKubernetes-132541) Downloading /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15565-3013/.minikube/cache/iso/amd64/minikube-v1.28.0-1673190013-15565-amd64.iso...
	I0108 13:25:42.460398   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.460335   11094 common.go:123] Creating ssh key: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/id_rsa...
	I0108 13:25:42.503141   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.503046   11094 common.go:129] Creating raw disk image: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk...
	I0108 13:25:42.503157   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Writing magic tar header
	I0108 13:25:42.503171   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Writing SSH key tar header
	I0108 13:25:42.503539   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.503489   11094 common.go:143] Fixing permissions on /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541 ...
	I0108 13:25:42.653730   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:42.653744   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid
	I0108 13:25:42.653784   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Using UUID 014f6508-8f9b-11ed-91e7-149d997fca88
	I0108 13:25:42.678089   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Generated MAC 4e:f0:b3:1f:f:2b
	I0108 13:25:42.678104   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541
	I0108 13:25:42.678131   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"014f6508-8f9b-11ed-91e7-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000250e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0108 13:25:42.678168   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"014f6508-8f9b-11ed-91e7-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000250e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0108 13:25:42.678217   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "014f6508-8f9b-11ed-91e7-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/tty,log=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541"}
	I0108 13:25:42.678253   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 014f6508-8f9b-11ed-91e7-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/tty,log=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/console-ring -f kexec,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541"
	I0108 13:25:42.678262   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0108 13:25:42.679546   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Pid is 11097
	I0108 13:25:42.679869   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 0
	I0108 13:25:42.679885   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:42.679938   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
	I0108 13:25:42.680931   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
	I0108 13:25:42.681019   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0108 13:25:42.681065   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
	I0108 13:25:42.681091   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
	I0108 13:25:42.681107   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
	I0108 13:25:42.681116   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
	I0108 13:25:42.681130   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
	I0108 13:25:42.681144   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
	I0108 13:25:42.681154   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
	I0108 13:25:42.681167   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
	I0108 13:25:42.681181   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
	I0108 13:25:42.681193   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
	I0108 13:25:42.681203   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
	I0108 13:25:42.681218   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
	I0108 13:25:42.681228   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
	I0108 13:25:42.681241   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
	I0108 13:25:42.681253   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
	I0108 13:25:42.681264   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
	I0108 13:25:42.681275   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
	I0108 13:25:42.681297   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
	I0108 13:25:42.681310   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
	I0108 13:25:42.681320   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
	I0108 13:25:42.681333   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
	I0108 13:25:42.681343   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
	I0108 13:25:42.681355   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
	I0108 13:25:42.681370   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
	I0108 13:25:42.681383   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
	I0108 13:25:42.681398   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
	I0108 13:25:42.686391   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0108 13:25:42.695558   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0108 13:25:42.696153   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0108 13:25:42.696174   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0108 13:25:42.696188   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0108 13:25:42.696199   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0108 13:25:43.257820   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0108 13:25:43.257832   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0108 13:25:43.362873   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0108 13:25:43.362883   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0108 13:25:43.362890   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0108 13:25:43.362899   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0108 13:25:43.363783   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0108 13:25:43.363789   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0108 13:25:44.682748   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 1
	I0108 13:25:44.682757   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:44.682837   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
	I0108 13:25:44.684384   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
	I0108 13:25:44.684451   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0108 13:25:44.684458   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
	I0108 13:25:44.684475   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
	I0108 13:25:44.684481   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
	I0108 13:25:44.684487   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
	I0108 13:25:44.684492   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
	I0108 13:25:44.684504   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
	I0108 13:25:44.684509   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
	I0108 13:25:44.684516   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
	I0108 13:25:44.684521   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
	I0108 13:25:44.684527   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
	I0108 13:25:44.684533   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
	I0108 13:25:44.684541   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
	I0108 13:25:44.684548   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
	I0108 13:25:44.684554   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
	I0108 13:25:44.684559   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
	I0108 13:25:44.684578   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
	I0108 13:25:44.684588   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
	I0108 13:25:44.684597   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
	I0108 13:25:44.684604   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
	I0108 13:25:44.684610   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
	I0108 13:25:44.684619   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
	I0108 13:25:44.684625   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
	I0108 13:25:44.684632   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
	I0108 13:25:44.684637   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
	I0108 13:25:44.684642   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
	I0108 13:25:44.684651   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
	I0108 13:25:46.686543   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 2
	I0108 13:25:46.686557   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:25:46.686628   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
	I0108 13:25:46.687410   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
	I0108 13:25:46.687458   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0108 13:25:46.687466   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
	I0108 13:25:46.687481   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
	I0108 13:25:46.687487   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
	I0108 13:25:46.687494   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
	I0108 13:25:46.687499   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
	I0108 13:25:46.687509   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
	I0108 13:25:46.687514   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
	I0108 13:25:46.687521   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
	I0108 13:25:46.687526   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
	I0108 13:25:46.687532   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
	I0108 13:25:46.687540   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
	I0108 13:25:46.687545   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
	I0108 13:25:46.687551   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
	I0108 13:25:46.687558   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
	I0108 13:25:46.687564   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
	I0108 13:25:46.687569   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
	I0108 13:25:46.687584   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
	I0108 13:25:46.687591   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
	I0108 13:25:46.687599   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
	I0108 13:25:46.687607   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
	I0108 13:25:46.687612   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
	I0108 13:25:46.687617   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
	I0108 13:25:46.687623   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
	I0108 13:25:46.687629   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
	I0108 13:25:46.687636   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
	I0108 13:25:46.687643   11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
	I0108 13:25:45.282957   11017 addons.go:488] enableAddons completed in 861.279533ms
	I0108 13:25:45.388516   11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:45.388528   11017 pod_ready.go:81] duration metric: took 399.731294ms waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.388537   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.787890   11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:45.787901   11017 pod_ready.go:81] duration metric: took 399.340179ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:45.787908   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.187439   11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:46.187453   11017 pod_ready.go:81] duration metric: took 399.536729ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.187459   11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.588219   11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
	I0108 13:25:46.588232   11017 pod_ready.go:81] duration metric: took 400.763589ms waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
	I0108 13:25:46.588239   11017 pod_ready.go:38] duration metric: took 2.0892776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:25:46.588288   11017 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:25:46.588361   11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:25:46.598144   11017 api_server.go:71] duration metric: took 2.176485692s to wait for apiserver process to appear ...
	I0108 13:25:46.598158   11017 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:25:46.598165   11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
	I0108 13:25:46.602085   11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 200:
	ok
	I0108 13:25:46.602639   11017 api_server.go:140] control plane version: v1.25.3
	I0108 13:25:46.602648   11017 api_server.go:130] duration metric: took 4.486281ms to wait for apiserver health ...
	I0108 13:25:46.602654   11017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:25:46.791503   11017 system_pods.go:59] 7 kube-system pods found
	I0108 13:25:46.791521   11017 system_pods.go:61] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
	I0108 13:25:46.791526   11017 system_pods.go:61] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
	I0108 13:25:46.791529   11017 system_pods.go:61] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
	I0108 13:25:46.791533   11017 system_pods.go:61] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
	I0108 13:25:46.791538   11017 system_pods.go:61] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
	I0108 13:25:46.791542   11017 system_pods.go:61] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
	I0108 13:25:46.791550   11017 system_pods.go:61] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 13:25:46.791556   11017 system_pods.go:74] duration metric: took 188.896938ms to wait for pod list to return data ...
	I0108 13:25:46.791561   11017 default_sa.go:34] waiting for default service account to be created ...
	I0108 13:25:46.988179   11017 default_sa.go:45] found service account: "default"
	I0108 13:25:46.988192   11017 default_sa.go:55] duration metric: took 196.618556ms for default service account to be created ...
	I0108 13:25:46.988197   11017 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 13:25:47.191037   11017 system_pods.go:86] 7 kube-system pods found
	I0108 13:25:47.191051   11017 system_pods.go:89] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
	I0108 13:25:47.191056   11017 system_pods.go:89] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
	I0108 13:25:47.191059   11017 system_pods.go:89] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
	I0108 13:25:47.191062   11017 system_pods.go:89] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
	I0108 13:25:47.191068   11017 system_pods.go:89] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
	I0108 13:25:47.191071   11017 system_pods.go:89] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
	I0108 13:25:47.191075   11017 system_pods.go:89] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Running
	I0108 13:25:47.191079   11017 system_pods.go:126] duration metric: took 202.877582ms to wait for k8s-apps to be running ...
	I0108 13:25:47.191083   11017 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 13:25:47.191143   11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:25:47.200849   11017 system_svc.go:56] duration metric: took 9.761745ms WaitForService to wait for kubelet.
	I0108 13:25:47.200862   11017 kubeadm.go:573] duration metric: took 2.779206372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 13:25:47.200873   11017 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:25:47.388983   11017 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 13:25:47.388998   11017 node_conditions.go:123] node cpu capacity is 2
	I0108 13:25:47.389006   11017 node_conditions.go:105] duration metric: took 188.128513ms to run NodePressure ...
	I0108 13:25:47.389012   11017 start.go:217] waiting for startup goroutines ...
	I0108 13:25:47.389347   11017 ssh_runner.go:195] Run: rm -f paused
	I0108 13:25:47.433718   11017 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0108 13:25:47.476634   11017 out.go:177] * Done! kubectl is now configured to use "pause-132406" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-01-08 21:24:13 UTC, ends at Sun 2023-01-08 21:25:51 UTC. --
	Jan 08 21:25:25 pause-132406 dockerd[3702]: time="2023-01-08T21:25:25.290659171Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3314497202fc8ceeb27ca7190e02eafa07a3b2174edff746b07ed7a18bb2797e pid=5489 runtime=io.containerd.runc.v2
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310282074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310632315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310689702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.311253391Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2c864c071578be13dc25e84f4d73ec21beecae7650ed31f40171521323b956bc pid=5652 runtime=io.containerd.runc.v2
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590607044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590804961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590861947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.591114210Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cfca5ca38f1bb40a1f783df11849538c078a7ea84cd1507a93401e6ac921043c pid=5701 runtime=io.containerd.runc.v2
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696304305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696340262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696348212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696786183Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a037098dc5d0363118aa47fc6662a0cb9803f357dbe7488d39ac54fbda264a85 pid=5742 runtime=io.containerd.runc.v2
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852485421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852650670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852752050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.853144561Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2fa6736cca283a0849a4133c4846ff785f1dabecc824ab55422b9fe1df5fb20e pid=5806 runtime=io.containerd.runc.v2
	Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693887010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693975959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693985352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.694546414Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7509d84ccc611055e0a390b6d4f9edf99f5625ea09b62d1eae87e614b0930aa8 pid=6088 runtime=io.containerd.runc.v2
	Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.042984174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043017322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043025311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043168759Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/47155f3f92e2edb1c9b9544dbec392073d4a267f0e2e171c4c0c8f41eed1b42d pid=6226 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	47155f3f92e2e       6e38f40d628db       5 seconds ago       Running             storage-provisioner       0                   7509d84ccc611
	2fa6736cca283       5185b96f0becf       21 seconds ago      Running             coredns                   2                   2c864c071578b
	a037098dc5d03       beaaf00edd38a       21 seconds ago      Running             kube-proxy                2                   cfca5ca38f1bb
	3314497202fc8       6d23ec0e8b87e       26 seconds ago      Running             kube-scheduler            3                   7225e68b6cdb9
	2702ef37e8c9f       a8a176a5d5d69       26 seconds ago      Running             etcd                      3                   d6054662c415b
	85b18341d5fa3       6039992312758       27 seconds ago      Running             kube-controller-manager   3                   167990773c8df
	e49c330971e33       0346dbd74bcb9       27 seconds ago      Running             kube-apiserver            3                   7719cf6e2ded6
	6c8e664a440de       6d23ec0e8b87e       29 seconds ago      Created             kube-scheduler            2                   b17d288e92aba
	5836a9370f77e       beaaf00edd38a       29 seconds ago      Created             kube-proxy                1                   d0c6f1675c8df
	bf3a9fcdde4ed       5185b96f0becf       29 seconds ago      Created             coredns                   1                   80b9970570ee9
	359f540cb31f6       6039992312758       30 seconds ago      Created             kube-controller-manager   2                   82b65485dbb4d
	b3ea39090c67a       a8a176a5d5d69       30 seconds ago      Exited              etcd                      2                   d4f72481538e7
	a59e122b43f1f       0346dbd74bcb9       30 seconds ago      Exited              kube-apiserver            2                   b5535145a6cf3
	c2ddc4b3adc5e       5185b96f0becf       56 seconds ago      Exited              coredns                   0                   11b52ad80c153
	
	* 
	* ==> coredns [2fa6736cca28] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> coredns [bf3a9fcdde4e] <==
	* 
	* 
	* ==> coredns [c2ddc4b3adc5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-132406
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-132406
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=pause-132406
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T13_24_43_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:24:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-132406
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:25:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:24:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:24:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:24:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:25:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.27
	  Hostname:    pause-132406
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017572Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017572Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ba663e0089540a7aff02be8cb7e7914
	  System UUID:                c84e11ed-0000-0000-a16b-149d997fca88
	  Boot ID:                    e1c358fb-4be5-406e-aa57-71fbfb8be72e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-t2bdb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-pause-132406                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         68s
	  kube-system                 kube-apiserver-pause-132406             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-pause-132406    200m (10%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-c2zj2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-132406             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     69s                kubelet          Node pause-132406 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  69s                kubelet          Node pause-132406 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s                kubelet          Node pause-132406 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                69s                kubelet          Node pause-132406 status is now: NodeReady
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           58s                node-controller  Node pause-132406 event: Registered Node pause-132406 in Controller
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node pause-132406 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node pause-132406 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node pause-132406 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11s                node-controller  Node pause-132406 event: Registered Node pause-132406 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.891901] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.787838] systemd-fstab-generator[528]: Ignoring "noauto" for root device
	[  +0.090038] systemd-fstab-generator[539]: Ignoring "noauto" for root device
	[  +5.154737] systemd-fstab-generator[759]: Ignoring "noauto" for root device
	[  +1.211845] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.212584] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.092038] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.088864] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +1.451512] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[  +0.096327] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
	[  +3.011005] systemd-fstab-generator[1323]: Ignoring "noauto" for root device
	[  +0.609217] kauditd_printk_skb: 68 callbacks suppressed
	[ +14.122766] systemd-fstab-generator[1992]: Ignoring "noauto" for root device
	[ +11.883875] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.253764] systemd-fstab-generator[2883]: Ignoring "noauto" for root device
	[  +0.141331] systemd-fstab-generator[2894]: Ignoring "noauto" for root device
	[Jan 8 21:25] systemd-fstab-generator[2905]: Ignoring "noauto" for root device
	[  +0.401098] kauditd_printk_skb: 18 callbacks suppressed
	[ +16.643218] systemd-fstab-generator[4108]: Ignoring "noauto" for root device
	[  +0.107408] systemd-fstab-generator[4162]: Ignoring "noauto" for root device
	[  +5.496654] systemd-fstab-generator[5099]: Ignoring "noauto" for root device
	[  +6.803519] kauditd_printk_skb: 31 callbacks suppressed
	
	* 
	* ==> etcd [2702ef37e8c9] <==
	* {"level":"info","ts":"2023-01-08T21:25:25.967Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d9a8ee5ed7997f86","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-08T21:25:25.967Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-01-08T21:25:25.968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=(15684047793429249926)"}
	{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","added-peer-id":"d9a8ee5ed7997f86","added-peer-peer-urls":["https://192.168.64.27:2380"]}
	{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:25:25.971Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:25:25.971Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.27:2380"}
	{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.27:2380"}
	{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d9a8ee5ed7997f86","initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 received MsgPreVoteResp from d9a8ee5ed7997f86 at term 3"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became candidate at term 4"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 received MsgVoteResp from d9a8ee5ed7997f86 at term 4"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became leader at term 4"}
	{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d9a8ee5ed7997f86 elected leader d9a8ee5ed7997f86 at term 4"}
	{"level":"info","ts":"2023-01-08T21:25:26.967Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d9a8ee5ed7997f86","local-member-attributes":"{Name:pause-132406 ClientURLs:[https://192.168.64.27:2379]}","request-path":"/0/members/d9a8ee5ed7997f86/attributes","cluster-id":"d657f6537ff55566","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:25:26.967Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T21:25:26.970Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.27:2379"}
	
	* 
	* ==> etcd [b3ea39090c67] <==
	* {"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"]}
	{"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.4","git-sha":"08407ff76","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-132406","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2023-01-08T21:25:22.355Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"417.198µs"}
	{"level":"info","ts":"2023-01-08T21:25:22.363Z","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-01-08T21:25:22.365Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","commit-index":399}
	{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=()"}
	{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became follower at term 3"}
	{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d9a8ee5ed7997f86 [peers: [], term: 3, commit: 399, applied: 0, lastindex: 399, lastterm: 3]"}
	{"level":"warn","ts":"2023-01-08T21:25:22.366Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-01-08T21:25:22.367Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":382}
	{"level":"info","ts":"2023-01-08T21:25:22.368Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-01-08T21:25:22.368Z","caller":"etcdserver/corrupt.go:46","msg":"starting initial corruption check","local-member-id":"d9a8ee5ed7997f86","timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/corrupt.go:116","msg":"initial corruption checking passed; no corruption","local-member-id":"d9a8ee5ed7997f86"}
	{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d9a8ee5ed7997f86","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-01-08T21:25:22.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=(15684047793429249926)"}
	{"level":"info","ts":"2023-01-08T21:25:22.370Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","added-peer-id":"d9a8ee5ed7997f86","added-peer-peer-urls":["https://192.168.64.27:2380"]}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d9a8ee5ed7997f86","initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.27:2380"}
	{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.27:2380"}
	
	* 
	* ==> kernel <==
	*  21:25:52 up 1 min,  0 users,  load average: 0.89, 0.30, 0.11
	Linux pause-132406 5.10.57 #1 SMP Sun Jan 8 19:17:02 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a59e122b43f1] <==
	* 
	* 
	* ==> kube-apiserver [e49c330971e3] <==
	* I0108 21:25:28.700297       1 controller.go:85] Starting OpenAPI V3 controller
	I0108 21:25:28.700429       1 naming_controller.go:291] Starting NamingConditionController
	I0108 21:25:28.700511       1 establishing_controller.go:76] Starting EstablishingController
	I0108 21:25:28.700559       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0108 21:25:28.701254       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0108 21:25:28.701371       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0108 21:25:28.701544       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0108 21:25:28.702007       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0108 21:25:28.782489       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 21:25:28.791936       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:25:28.792377       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0108 21:25:28.792956       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:25:28.793154       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 21:25:28.795220       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:25:28.800502       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 21:25:28.858432       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:25:29.472691       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:25:29.697351       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:25:30.382302       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0108 21:25:30.390603       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0108 21:25:30.412248       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0108 21:25:30.430489       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:25:30.435393       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:25:41.061315       1 controller.go:616] quota admission added evaluator for: endpoints
	I0108 21:25:41.169568       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [359f540cb31f] <==
	* 
	* 
	* ==> kube-controller-manager [85b18341d5fa] <==
	* I0108 21:25:41.096790       1 shared_informer.go:262] Caches are synced for expand
	I0108 21:25:41.099296       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0108 21:25:41.115597       1 shared_informer.go:262] Caches are synced for deployment
	I0108 21:25:41.119008       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0108 21:25:41.133526       1 shared_informer.go:262] Caches are synced for node
	I0108 21:25:41.133592       1 range_allocator.go:166] Starting range CIDR allocator
	I0108 21:25:41.133606       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0108 21:25:41.133651       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0108 21:25:41.142840       1 shared_informer.go:262] Caches are synced for daemon sets
	I0108 21:25:41.157497       1 shared_informer.go:262] Caches are synced for taint
	I0108 21:25:41.157759       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0108 21:25:41.157993       1 taint_manager.go:209] "Sending events to api server"
	I0108 21:25:41.157772       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W0108 21:25:41.158417       1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-132406. Assuming now as a timestamp.
	I0108 21:25:41.158643       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0108 21:25:41.158045       1 event.go:294] "Event occurred" object="pause-132406" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-132406 event: Registered Node pause-132406 in Controller"
	I0108 21:25:41.160214       1 shared_informer.go:262] Caches are synced for persistent volume
	I0108 21:25:41.161892       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0108 21:25:41.162048       1 shared_informer.go:262] Caches are synced for GC
	I0108 21:25:41.171471       1 shared_informer.go:262] Caches are synced for TTL
	I0108 21:25:41.197886       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:25:41.235793       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 21:25:41.610248       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:25:41.657226       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 21:25:41.657437       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [5836a9370f77] <==
	* 
	* 
	* ==> kube-proxy [a037098dc5d0] <==
	* I0108 21:25:30.850478       1 node.go:163] Successfully retrieved node IP: 192.168.64.27
	I0108 21:25:30.850523       1 server_others.go:138] "Detected node IP" address="192.168.64.27"
	I0108 21:25:30.850546       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 21:25:30.900885       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0108 21:25:30.901123       1 server_others.go:206] "Using iptables Proxier"
	I0108 21:25:30.901146       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 21:25:30.902097       1 server.go:661] "Version info" version="v1.25.3"
	I0108 21:25:30.902216       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:25:30.902654       1 config.go:317] "Starting service config controller"
	I0108 21:25:30.902693       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 21:25:30.902720       1 config.go:226] "Starting endpoint slice config controller"
	I0108 21:25:30.902731       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 21:25:30.903108       1 config.go:444] "Starting node config controller"
	I0108 21:25:30.903471       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 21:25:31.002821       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0108 21:25:31.002956       1 shared_informer.go:262] Caches are synced for service config
	I0108 21:25:31.003974       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3314497202fc] <==
	* I0108 21:25:26.516401       1 serving.go:348] Generated self-signed cert in-memory
	W0108 21:25:28.747634       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 21:25:28.747668       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:25:28.747676       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 21:25:28.747682       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 21:25:28.778528       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0108 21:25:28.778884       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:25:28.780681       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0108 21:25:28.780730       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 21:25:28.781271       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:25:28.780750       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:25:28.882423       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [6c8e664a440d] <==
	* 
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-01-08 21:24:13 UTC, ends at Sun 2023-01-08 21:25:53 UTC. --
	Jan 08 21:25:28 pause-132406 kubelet[5105]: E0108 21:25:28.600279    5105 kubelet.go:2448] "Error getting node" err="node \"pause-132406\" not found"
	Jan 08 21:25:28 pause-132406 kubelet[5105]: E0108 21:25:28.700806    5105 kubelet.go:2448] "Error getting node" err="node \"pause-132406\" not found"
	Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.801519    5105 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.802334    5105 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.818830    5105 kubelet_node_status.go:108] "Node was previously registered" node="pause-132406"
	Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.818970    5105 kubelet_node_status.go:73] "Successfully registered node" node="pause-132406"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.648000    5105 apiserver.go:52] "Watching apiserver"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.649885    5105 topology_manager.go:205] "Topology Admit Handler"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.649962    5105 topology_manager.go:205] "Topology Admit Handler"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.709981    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b1d4603-7531-4c5b-b5d1-17f4712c727e-config-volume\") pod \"coredns-565d847f94-t2bdb\" (UID: \"4b1d4603-7531-4c5b-b5d1-17f4712c727e\") " pod="kube-system/coredns-565d847f94-t2bdb"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710359    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4nq\" (UniqueName: \"kubernetes.io/projected/4b1d4603-7531-4c5b-b5d1-17f4712c727e-kube-api-access-bm4nq\") pod \"coredns-565d847f94-t2bdb\" (UID: \"4b1d4603-7531-4c5b-b5d1-17f4712c727e\") " pod="kube-system/coredns-565d847f94-t2bdb"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710457    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06f5a965-c191-491e-a8ca-81e45cdab1e0-kube-proxy\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710554    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06f5a965-c191-491e-a8ca-81e45cdab1e0-xtables-lock\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710604    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzq48\" (UniqueName: \"kubernetes.io/projected/06f5a965-c191-491e-a8ca-81e45cdab1e0-kube-api-access-lzq48\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710707    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06f5a965-c191-491e-a8ca-81e45cdab1e0-lib-modules\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
	Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710785    5105 reconciler.go:169] "Reconciler: start to sync state"
	Jan 08 21:25:30 pause-132406 kubelet[5105]: I0108 21:25:30.786116    5105 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2c864c071578be13dc25e84f4d73ec21beecae7650ed31f40171521323b956bc"
	Jan 08 21:25:32 pause-132406 kubelet[5105]: I0108 21:25:32.815079    5105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 08 21:25:38 pause-132406 kubelet[5105]: I0108 21:25:38.949973    5105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.283961    5105 topology_manager.go:205] "Topology Admit Handler"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: E0108 21:25:45.284028    5105 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="877d71f1-d869-4d8d-8534-9b676cc5beb0" containerName="coredns"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.284048    5105 memory_manager.go:345] "RemoveStaleState removing state" podUID="877d71f1-d869-4d8d-8534-9b676cc5beb0" containerName="coredns"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.379908    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a4d0a073-64e2-44d3-b701-67c31b2c9dcb-tmp\") pod \"storage-provisioner\" (UID: \"a4d0a073-64e2-44d3-b701-67c31b2c9dcb\") " pod="kube-system/storage-provisioner"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.380046    5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dd8\" (UniqueName: \"kubernetes.io/projected/a4d0a073-64e2-44d3-b701-67c31b2c9dcb-kube-api-access-b5dd8\") pod \"storage-provisioner\" (UID: \"a4d0a073-64e2-44d3-b701-67c31b2c9dcb\") " pod="kube-system/storage-provisioner"
	Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.964235    5105 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7509d84ccc611055e0a390b6d4f9edf99f5625ea09b62d1eae87e614b0930aa8"
	
	* 
	* ==> storage-provisioner [47155f3f92e2] <==
	* I0108 21:25:46.098217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:25:46.107255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:25:46.107432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:25:46.112096       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:25:46.112481       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5!
	I0108 21:25:46.113189       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4245f6bb-b0ff-44ce-bc47-687e46bad904", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5 became leader
	I0108 21:25:46.217361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-132406 -n pause-132406
helpers_test.go:261: (dbg) Run:  kubectl --context pause-132406 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-132406 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-132406 describe pod : exit status 1 (39.902902ms)

** stderr **
	error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-132406 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (55.05s)

TestNetworkPlugins/group/kubenet/HairPin (54.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.095843444s)

** stderr **
	command terminated with exit code 1
** /stderr **
E0108 13:36:12.506379    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0108 13:36:16.002059    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110688164s)

** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.109340167s)

** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110770983s)

** stderr **
	command terminated with exit code 1
** /stderr **
E0108 13:36:32.988360    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.106764683s)

** stderr **
	command terminated with exit code 1
** /stderr **
E0108 13:36:42.589299    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110568821s)

** stderr **
	command terminated with exit code 1
** /stderr **
E0108 13:36:55.606481    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110431822s)

** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (54.16s)

Test pass (283/301)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 30.41
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.32
10 TestDownloadOnly/v1.25.3/json-events 21.94
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.3
16 TestDownloadOnly/DeleteAll 0.41
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
19 TestBinaryMirror 0.96
20 TestOffline 67.55
22 TestAddons/Setup 130.51
24 TestAddons/parallel/Registry 16.03
25 TestAddons/parallel/Ingress 20.63
26 TestAddons/parallel/MetricsServer 5.51
27 TestAddons/parallel/HelmTiller 12.63
29 TestAddons/parallel/CSI 46.29
30 TestAddons/parallel/Headlamp 10.33
31 TestAddons/parallel/CloudSpanner 5.32
34 TestAddons/serial/GCPAuth/Namespaces 0.1
35 TestAddons/StoppedEnableDisable 8.61
36 TestCertOptions 47.7
37 TestCertExpiration 256.69
38 TestDockerFlags 46.57
39 TestForceSystemdFlag 40.51
40 TestForceSystemdEnv 46.8
42 TestHyperKitDriverInstallOrUpdate 7.3
45 TestErrorSpam/setup 37.94
46 TestErrorSpam/start 1.48
47 TestErrorSpam/status 0.48
48 TestErrorSpam/pause 1.31
49 TestErrorSpam/unpause 1.39
50 TestErrorSpam/stop 3.69
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 91.77
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 45.75
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 8.08
62 TestFunctional/serial/CacheCmd/cache/add_local 1.45
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.08
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
66 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
67 TestFunctional/serial/CacheCmd/cache/delete 0.18
68 TestFunctional/serial/MinikubeKubectlCmd 0.49
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.68
70 TestFunctional/serial/ExtraConfig 51.36
71 TestFunctional/serial/ComponentHealth 0.05
72 TestFunctional/serial/LogsCmd 2.61
73 TestFunctional/serial/LogsFileCmd 2.68
75 TestFunctional/parallel/ConfigCmd 0.52
76 TestFunctional/parallel/DashboardCmd 8.32
77 TestFunctional/parallel/DryRun 0.91
78 TestFunctional/parallel/InternationalLanguage 0.49
79 TestFunctional/parallel/StatusCmd 0.49
82 TestFunctional/parallel/ServiceCmd 13.3
83 TestFunctional/parallel/ServiceCmdConnect 7.57
84 TestFunctional/parallel/AddonsCmd 0.29
85 TestFunctional/parallel/PersistentVolumeClaim 26.65
87 TestFunctional/parallel/SSHCmd 0.28
88 TestFunctional/parallel/CpCmd 0.63
89 TestFunctional/parallel/MySQL 25.21
90 TestFunctional/parallel/FileSync 0.19
91 TestFunctional/parallel/CertSync 1.11
95 TestFunctional/parallel/NodeLabels 0.08
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
99 TestFunctional/parallel/License 0.83
100 TestFunctional/parallel/Version/short 0.1
101 TestFunctional/parallel/Version/components 0.5
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.15
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.17
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
106 TestFunctional/parallel/ImageCommands/ImageBuild 4
107 TestFunctional/parallel/ImageCommands/Setup 3.14
108 TestFunctional/parallel/DockerEnv/bash 0.75
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.08
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.09
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.14
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.98
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.45
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.02
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.14
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
130 TestFunctional/parallel/ProfileCmd/profile_list 0.29
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
132 TestFunctional/parallel/MountCmd/any-port 8.09
133 TestFunctional/parallel/MountCmd/specific-port 1.72
134 TestFunctional/delete_addon-resizer_images 0.15
135 TestFunctional/delete_my-image_image 0.06
136 TestFunctional/delete_minikube_cached_images 0.06
139 TestIngressAddonLegacy/StartLegacyK8sCluster 106.7
141 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.75
142 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
143 TestIngressAddonLegacy/serial/ValidateIngressAddons 37.52
146 TestJSONOutput/start/Command 54.58
147 TestJSONOutput/start/Audit 0
149 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
150 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
152 TestJSONOutput/pause/Command 0.9
153 TestJSONOutput/pause/Audit 0
155 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/unpause/Command 0.45
159 TestJSONOutput/unpause/Audit 0
161 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/stop/Command 8.17
165 TestJSONOutput/stop/Audit 0
167 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
169 TestErrorJSONOutput 0.76
173 TestMainNoArgs 0.08
174 TestMinikubeProfile 93.96
177 TestMountStart/serial/StartWithMountFirst 15.01
178 TestMountStart/serial/VerifyMountFirst 0.3
179 TestMountStart/serial/StartWithMountSecond 14.59
180 TestMountStart/serial/VerifyMountSecond 0.32
181 TestMountStart/serial/DeleteFirst 2.4
182 TestMountStart/serial/VerifyMountPostDelete 0.31
183 TestMountStart/serial/Stop 2.22
184 TestMountStart/serial/RestartStopped 15.96
185 TestMountStart/serial/VerifyMountPostStop 0.29
188 TestMultiNode/serial/FreshStart2Nodes 126.85
189 TestMultiNode/serial/DeployApp2Nodes 5.5
190 TestMultiNode/serial/PingHostFrom2Pods 0.87
191 TestMultiNode/serial/AddNode 43.71
192 TestMultiNode/serial/ProfileList 0.21
193 TestMultiNode/serial/CopyFile 5.41
194 TestMultiNode/serial/StopNode 2.67
195 TestMultiNode/serial/StartAfterStop 30.91
196 TestMultiNode/serial/RestartKeepsNodes 838.48
197 TestMultiNode/serial/DeleteNode 4.93
198 TestMultiNode/serial/StopMultiNode 3.5
199 TestMultiNode/serial/RestartMultiNode 555.15
200 TestMultiNode/serial/ValidateNameConflict 47.41
204 TestPreload 142.44
206 TestScheduledStopUnix 108.13
207 TestSkaffold 75.55
210 TestRunningBinaryUpgrade 156.76
212 TestKubernetesUpgrade 138.23
225 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.92
226 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.11
227 TestStoppedBinaryUpgrade/Setup 1.96
228 TestStoppedBinaryUpgrade/Upgrade 180.28
230 TestPause/serial/Start 52.86
232 TestStoppedBinaryUpgrade/MinikubeLogs 3.12
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.44
242 TestNoKubernetes/serial/StartWithK8s 43.72
243 TestNetworkPlugins/group/auto/Start 55.89
244 TestNoKubernetes/serial/StartWithStopK8s 16.38
245 TestNoKubernetes/serial/Start 14.29
246 TestNetworkPlugins/group/auto/KubeletFlags 0.15
247 TestNetworkPlugins/group/auto/NetCatPod 11.19
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.12
249 TestNoKubernetes/serial/ProfileList 0.53
250 TestNoKubernetes/serial/Stop 2.23
251 TestNoKubernetes/serial/StartNoArgs 14.8
252 TestNetworkPlugins/group/auto/DNS 0.13
253 TestNetworkPlugins/group/auto/Localhost 0.1
254 TestNetworkPlugins/group/auto/HairPin 5.11
255 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.12
256 TestNetworkPlugins/group/cilium/Start 103.17
257 TestNetworkPlugins/group/calico/Start 316.44
258 TestNetworkPlugins/group/cilium/ControllerPod 5.01
259 TestNetworkPlugins/group/cilium/KubeletFlags 0.16
260 TestNetworkPlugins/group/cilium/NetCatPod 12.67
261 TestNetworkPlugins/group/cilium/DNS 0.15
262 TestNetworkPlugins/group/cilium/Localhost 0.11
263 TestNetworkPlugins/group/cilium/HairPin 0.1
264 TestNetworkPlugins/group/custom-flannel/Start 91.2
265 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
266 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.22
267 TestNetworkPlugins/group/custom-flannel/DNS 0.11
268 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
269 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
270 TestNetworkPlugins/group/false/Start 53.76
271 TestNetworkPlugins/group/false/KubeletFlags 0.19
272 TestNetworkPlugins/group/false/NetCatPod 11.19
273 TestNetworkPlugins/group/false/DNS 0.13
274 TestNetworkPlugins/group/false/Localhost 0.1
275 TestNetworkPlugins/group/false/HairPin 5.1
276 TestNetworkPlugins/group/kindnet/Start 62.09
277 TestNetworkPlugins/group/calico/ControllerPod 5.01
278 TestNetworkPlugins/group/calico/KubeletFlags 0.18
279 TestNetworkPlugins/group/calico/NetCatPod 12.36
280 TestNetworkPlugins/group/calico/DNS 0.16
281 TestNetworkPlugins/group/calico/Localhost 0.11
282 TestNetworkPlugins/group/calico/HairPin 0.1
283 TestNetworkPlugins/group/flannel/Start 57.95
284 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
285 TestNetworkPlugins/group/kindnet/KubeletFlags 0.16
286 TestNetworkPlugins/group/kindnet/NetCatPod 11.19
287 TestNetworkPlugins/group/kindnet/DNS 0.12
288 TestNetworkPlugins/group/kindnet/Localhost 0.11
289 TestNetworkPlugins/group/kindnet/HairPin 0.1
290 TestNetworkPlugins/group/enable-default-cni/Start 56.87
291 TestNetworkPlugins/group/flannel/ControllerPod 5.01
292 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
293 TestNetworkPlugins/group/flannel/NetCatPod 12.19
294 TestNetworkPlugins/group/flannel/DNS 0.12
295 TestNetworkPlugins/group/flannel/Localhost 0.1
296 TestNetworkPlugins/group/flannel/HairPin 0.1
297 TestNetworkPlugins/group/bridge/Start 54.38
298 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.16
299 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.21
300 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
301 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
302 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
303 TestNetworkPlugins/group/kubenet/Start 52.68
304 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
305 TestNetworkPlugins/group/bridge/NetCatPod 13.18
306 TestNetworkPlugins/group/bridge/DNS 0.12
307 TestNetworkPlugins/group/bridge/Localhost 0.11
308 TestNetworkPlugins/group/bridge/HairPin 0.12
310 TestStartStop/group/old-k8s-version/serial/FirstStart 156.26
311 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
312 TestNetworkPlugins/group/kubenet/NetCatPod 12.19
313 TestNetworkPlugins/group/kubenet/DNS 0.12
314 TestNetworkPlugins/group/kubenet/Localhost 0.1
317 TestStartStop/group/no-preload/serial/FirstStart 66.15
318 TestStartStop/group/old-k8s-version/serial/DeployApp 9.31
319 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.72
320 TestStartStop/group/old-k8s-version/serial/Stop 1.24
321 TestStartStop/group/no-preload/serial/DeployApp 10.31
322 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
323 TestStartStop/group/old-k8s-version/serial/SecondStart 453.55
324 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.66
325 TestStartStop/group/no-preload/serial/Stop 8.24
326 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.3
327 TestStartStop/group/no-preload/serial/SecondStart 313.89
328 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
329 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
330 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.18
331 TestStartStop/group/no-preload/serial/Pause 1.93
333 TestStartStop/group/embed-certs/serial/FirstStart 57.97
334 TestStartStop/group/embed-certs/serial/DeployApp 10.27
335 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.71
336 TestStartStop/group/embed-certs/serial/Stop 8.25
337 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.33
338 TestStartStop/group/embed-certs/serial/SecondStart 324.38
339 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
340 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
341 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.18
342 TestStartStop/group/old-k8s-version/serial/Pause 1.84
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 64.93
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.63
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.24
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 320.73
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.18
353 TestStartStop/group/embed-certs/serial/Pause 1.87
355 TestStartStop/group/newest-cni/serial/FirstStart 53.04
356 TestStartStop/group/newest-cni/serial/DeployApp 0
357 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.71
358 TestStartStop/group/newest-cni/serial/Stop 8.32
359 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.37
360 TestStartStop/group/newest-cni/serial/SecondStart 31.78
361 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.18
364 TestStartStop/group/newest-cni/serial/Pause 1.8
365 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
366 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
367 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.18
368 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.83
TestDownloadOnly/v1.16.0/json-events (30.41s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-122642 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-122642 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (30.411961228s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (30.41s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-122642
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-122642: exit status 85 (316.127236ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-122642 | jenkins | v1.28.0 | 08 Jan 23 12:26 PST |          |
	|         | -p download-only-122642        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 12:26:43
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 12:26:43.028279    4203 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:26:43.028703    4203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:26:43.028709    4203 out.go:309] Setting ErrFile to fd 2...
	I0108 12:26:43.028712    4203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:26:43.028825    4203 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
	W0108 12:26:43.028934    4203 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-3013/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-3013/.minikube/config/config.json: no such file or directory
	I0108 12:26:43.029703    4203 out.go:303] Setting JSON to true
	I0108 12:26:43.048405    4203 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1577,"bootTime":1673208026,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0108 12:26:43.048518    4203 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:26:43.071516    4203 out.go:97] [download-only-122642] minikube v1.28.0 on Darwin 13.0.1
	I0108 12:26:43.071727    4203 notify.go:220] Checking for updates...
	W0108 12:26:43.071794    4203 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 12:26:43.091867    4203 out.go:169] MINIKUBE_LOCATION=15565
	I0108 12:26:43.113174    4203 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	I0108 12:26:43.135319    4203 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:26:43.157162    4203 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:26:43.179297    4203 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	W0108 12:26:43.221873    4203 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 12:26:43.222277    4203 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 12:26:43.386044    4203 out.go:97] Using the hyperkit driver based on user configuration
	I0108 12:26:43.386084    4203 start.go:294] selected driver: hyperkit
	I0108 12:26:43.386096    4203 start.go:838] validating driver "hyperkit" against <nil>
	I0108 12:26:43.386219    4203 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 12:26:43.386608    4203 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3013/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0108 12:26:43.524223    4203 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0108 12:26:43.528932    4203 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:26:43.528959    4203 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0108 12:26:43.528999    4203 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 12:26:43.532713    4203 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0108 12:26:43.532816    4203 start_flags.go:892] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 12:26:43.532843    4203 cni.go:95] Creating CNI manager for ""
	I0108 12:26:43.532854    4203 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 12:26:43.532873    4203 start_flags.go:317] config:
	{Name:download-only-122642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-122642 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:26:43.533105    4203 iso.go:125] acquiring lock: {Name:mk509bccdb22b8c95ebe7c0f784c1151265efda4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 12:26:43.554735    4203 out.go:97] Downloading VM boot image ...
	I0108 12:26:43.554852    4203 download.go:101] Downloading: https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/iso/amd64/minikube-v1.28.0-1673190013-15565-amd64.iso
	I0108 12:26:55.479373    4203 out.go:97] Starting control plane node download-only-122642 in cluster download-only-122642
	I0108 12:26:55.479470    4203 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 12:26:55.567215    4203 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 12:26:55.567258    4203 cache.go:57] Caching tarball of preloaded images
	I0108 12:26:55.567630    4203 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 12:26:55.588395    4203 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 12:26:55.588496    4203 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:26:55.795493    4203 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 12:27:08.016108    4203 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:27:08.016298    4203 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:27:08.556176    4203 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 12:27:08.556404    4203 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/download-only-122642/config.json ...
	I0108 12:27:08.556435    4203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/download-only-122642/config.json: {Name:mk5e72913dafaf966f602107de96304cc878f0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:27:08.556773    4203 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 12:27:08.557195    4203 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-122642"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.32s)
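The preload steps logged above (preload.go:238 → preload.go:256) fetch the tarball with an `?checksum=md5:…` query string, then save the checksum and re-verify the file on disk. A minimal Python sketch of that verify step, using a small stand-in file instead of the real multi-hundred-MB tarball (helper names here are mine, not minikube's):

```python
import hashlib
import os
import tempfile

def md5_hex(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return its MD5 digest as lowercase hex."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_preload(path: str, want: str) -> bool:
    """Compare the downloaded file's digest to the expected checksum,
    as in the 'verifying checksum of ...' log line above."""
    return md5_hex(path) == want

# Demo with a tiny temp file whose MD5 is well known.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"The quick brown fox jumps over the lazy dog")
    tmp = f.name
print(verify_preload(tmp, "9e107d9d372bb6826bd81d3542a419d6"))  # True
os.remove(tmp)
```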

TestDownloadOnly/v1.25.3/json-events (21.94s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-122642 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-122642 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=hyperkit : (21.944613032s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (21.94s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-122642
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-122642: exit status 85 (295.580028ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-122642 | jenkins | v1.28.0 | 08 Jan 23 12:26 PST |          |
	|         | -p download-only-122642        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-122642 | jenkins | v1.28.0 | 08 Jan 23 12:27 PST |          |
	|         | -p download-only-122642        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 12:27:13
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 12:27:13.764930    4243 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:27:13.765196    4243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:27:13.765202    4243 out.go:309] Setting ErrFile to fd 2...
	I0108 12:27:13.765206    4243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:27:13.765319    4243 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
	W0108 12:27:13.765416    4243 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-3013/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-3013/.minikube/config/config.json: no such file or directory
	I0108 12:27:13.765788    4243 out.go:303] Setting JSON to true
	I0108 12:27:13.784344    4243 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1607,"bootTime":1673208026,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0108 12:27:13.784439    4243 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:27:13.806455    4243 out.go:97] [download-only-122642] minikube v1.28.0 on Darwin 13.0.1
	I0108 12:27:13.806655    4243 notify.go:220] Checking for updates...
	I0108 12:27:13.828023    4243 out.go:169] MINIKUBE_LOCATION=15565
	I0108 12:27:13.849194    4243 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	I0108 12:27:13.871432    4243 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:27:13.893363    4243 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:27:13.915064    4243 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	W0108 12:27:13.956972    4243 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 12:27:13.957394    4243 config.go:180] Loaded profile config "download-only-122642": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0108 12:27:13.957443    4243 start.go:746] api.Load failed for download-only-122642: filestore "download-only-122642": Docker machine "download-only-122642" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 12:27:13.957483    4243 driver.go:365] Setting default libvirt URI to qemu:///system
	W0108 12:27:13.957506    4243 start.go:746] api.Load failed for download-only-122642: filestore "download-only-122642": Docker machine "download-only-122642" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 12:27:13.985056    4243 out.go:97] Using the hyperkit driver based on existing profile
	I0108 12:27:13.985128    4243 start.go:294] selected driver: hyperkit
	I0108 12:27:13.985145    4243 start.go:838] validating driver "hyperkit" against &{Name:download-only-122642 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-122642 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:27:13.985403    4243 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 12:27:13.985610    4243 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3013/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0108 12:27:13.993894    4243 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0108 12:27:13.997111    4243 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:27:13.997128    4243 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0108 12:27:13.999312    4243 cni.go:95] Creating CNI manager for ""
	I0108 12:27:13.999327    4243 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 12:27:13.999349    4243 start_flags.go:317] config:
	{Name:download-only-122642 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-122642 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:27:13.999461    4243 iso.go:125] acquiring lock: {Name:mk509bccdb22b8c95ebe7c0f784c1151265efda4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 12:27:14.021112    4243 out.go:97] Starting control plane node download-only-122642 in cluster download-only-122642
	I0108 12:27:14.021227    4243 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:27:14.109944    4243 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 12:27:14.109984    4243 cache.go:57] Caching tarball of preloaded images
	I0108 12:27:14.110399    4243 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:27:14.132074    4243 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I0108 12:27:14.132171    4243 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:27:14.346969    4243 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 12:27:31.400830    4243 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:27:31.401019    4243 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:27:31.994386    4243 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 12:27:31.994492    4243 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/download-only-122642/config.json ...
	I0108 12:27:31.994906    4243 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:27:31.995152    4243 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/darwin/amd64/v1.25.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-122642"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.30s)

TestDownloadOnly/DeleteAll (0.41s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.41s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-122642
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

TestBinaryMirror (0.96s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-122737 --alsologtostderr --binary-mirror http://127.0.0.1:49364 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-122737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-122737
--- PASS: TestBinaryMirror (0.96s)
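TestBinaryMirror points minikube at a local HTTP server via `--binary-mirror http://127.0.0.1:49364`, so binaries are fetched from the mirror instead of storage.googleapis.com. A rough sketch of such a mirror: serve a directory tree over HTTP and fetch from it. The directory layout and file contents below are illustrative assumptions (modeled loosely on the cache paths seen in the logs), not minikube's documented mirror layout:

```python
import functools
import http.server
import pathlib
import socketserver
import tempfile
import threading
import urllib.request

# Lay out a fake mirror tree; a real mirror would hold actual kubectl binaries.
root = tempfile.mkdtemp()
bindir = pathlib.Path(root, "v1.25.3", "bin", "darwin", "amd64")
bindir.mkdir(parents=True)
(bindir / "kubectl").write_bytes(b"fake-kubectl")

# Serve the tree on an ephemeral 127.0.0.1 port, as the test does on :49364.
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=root)
httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{port}/v1.25.3/bin/darwin/amd64/kubectl"
data = urllib.request.urlopen(url).read()
print(data == b"fake-kubectl")  # True
httpd.shutdown()
```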

TestOffline (67.55s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-131629 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-131629 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (1m2.283333345s)
helpers_test.go:175: Cleaning up "offline-docker-131629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-131629

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-131629: (5.270471019s)
--- PASS: TestOffline (67.55s)

TestAddons/Setup (130.51s)

=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-122738 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p addons-122738 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m10.5134544s)
--- PASS: TestAddons/Setup (130.51s)

TestAddons/parallel/Registry (16.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:287: registry stabilized in 6.784985ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-sgrts" [47fecc98-4320-4e56-8077-35d14f70ac7d] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009523921s
addons_test.go:292: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-2rh9d" [2ddbde6e-1b13-4482-a8c1-cf5ad5a5dbf5] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008049255s
addons_test.go:297: (dbg) Run:  kubectl --context addons-122738 delete po -l run=registry-test --now
addons_test.go:302: (dbg) Run:  kubectl --context addons-122738 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) Done: kubectl --context addons-122738 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.40516325s)
addons_test.go:316: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122738 ip
2023/01/08 12:30:04 [DEBUG] GET http://192.168.64.2:5000
addons_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122738 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.03s)

TestAddons/parallel/Ingress (20.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:169: (dbg) Run:  kubectl --context addons-122738 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:189: (dbg) Run:  kubectl --context addons-122738 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context addons-122738 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [7fdeb347-31a8-408b-a2f9-8eb3b33b87eb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [7fdeb347-31a8-408b-a2f9-8eb3b33b87eb] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.008351664s
addons_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122738 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context addons-122738 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122738 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.64.2
addons_test.go:263: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122738 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:268: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122738 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:268: (dbg) Done: out/minikube-darwin-amd64 -p addons-122738 addons disable ingress --alsologtostderr -v=1: (7.388291606s)
--- PASS: TestAddons/parallel/Ingress (20.63s)

TestAddons/parallel/MetricsServer (5.51s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 1.81926ms
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-56c6cfbdd9-jvxf6" [eb4671d5-41d8-4eb2-8150-cf4dd4856c10] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010677959s
addons_test.go:372: (dbg) Run:  kubectl --context addons-122738 top pods -n kube-system
addons_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122738 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.51s)

TestAddons/parallel/HelmTiller (12.63s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:413: tiller-deploy stabilized in 2.103131ms
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-8c62x" [8cb9e955-0904-4155-afd8-0316cb541758] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.011197197s
addons_test.go:430: (dbg) Run:  kubectl --context addons-122738 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:430: (dbg) Done: kubectl --context addons-122738 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.240538889s)
addons_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122738 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.63s)

TestAddons/parallel/CSI (46.29s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:518: csi-hostpath-driver pods stabilized in 3.143418ms
addons_test.go:521: (dbg) Run:  kubectl --context addons-122738 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:526: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-122738 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:531: (dbg) Run:  kubectl --context addons-122738 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:536: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [3e66136f-3abc-484e-9f2f-a68985c42560] Pending
helpers_test.go:342: "task-pv-pod" [3e66136f-3abc-484e-9f2f-a68985c42560] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [3e66136f-3abc-484e-9f2f-a68985c42560] Running
addons_test.go:536: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.012787277s
addons_test.go:541: (dbg) Run:  kubectl --context addons-122738 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:546: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-122738 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-122738 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:551: (dbg) Run:  kubectl --context addons-122738 delete pod task-pv-pod
addons_test.go:551: (dbg) Done: kubectl --context addons-122738 delete pod task-pv-pod: (1.033655077s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-122738 delete pvc hpvc
addons_test.go:563: (dbg) Run:  kubectl --context addons-122738 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-122738 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-122738 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [9a5070a2-0720-46ff-aaea-8ae60d1e70a4] Pending
helpers_test.go:342: "task-pv-pod-restore" [9a5070a2-0720-46ff-aaea-8ae60d1e70a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [9a5070a2-0720-46ff-aaea-8ae60d1e70a4] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.007830701s
addons_test.go:583: (dbg) Run:  kubectl --context addons-122738 delete pod task-pv-pod-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-122738 delete pvc hpvc-restore
addons_test.go:591: (dbg) Run:  kubectl --context addons-122738 delete volumesnapshot new-snapshot-demo
addons_test.go:595: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122738 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:595: (dbg) Done: out/minikube-darwin-amd64 -p addons-122738 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.64230185s)
addons_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122738 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.29s)

TestAddons/parallel/Headlamp (10.33s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-122738 --alsologtostderr -v=1
addons_test.go:774: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-122738 --alsologtostderr -v=1: (1.319261411s)
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-764769c887-kmspj" [4e881581-86df-42ac-b3f8-603707329e44] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-764769c887-kmspj" [4e881581-86df-42ac-b3f8-603707329e44] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.008702998s
--- PASS: TestAddons/parallel/Headlamp (10.33s)

TestAddons/parallel/CloudSpanner (5.32s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:342: "cloud-spanner-emulator-7d7766f55c-jjqsd" [f75fdbdc-efe8-462c-a57a-daa4a72dcbc1] Running

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007932879s
addons_test.go:798: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-122738
--- PASS: TestAddons/parallel/CloudSpanner (5.32s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:607: (dbg) Run:  kubectl --context addons-122738 create ns new-namespace
addons_test.go:621: (dbg) Run:  kubectl --context addons-122738 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (8.61s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:139: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-122738
addons_test.go:139: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-122738: (8.242124301s)
addons_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-122738
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-122738
--- PASS: TestAddons/StoppedEnableDisable (8.61s)

TestCertOptions (47.7s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-131823 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-131823 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (43.874021677s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-131823 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-131823 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-131823 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-131823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-131823
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-131823: (3.48131881s)
--- PASS: TestCertOptions (47.70s)

TestCertExpiration (256.69s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-131814 --memory=2048 --cert-expiration=3m --driver=hyperkit 

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-131814 --memory=2048 --cert-expiration=3m --driver=hyperkit : (40.465183259s)
E0108 13:19:00.522160    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-131814 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0108 13:21:56.969905    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-131814 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (30.922510459s)
helpers_test.go:175: Cleaning up "cert-expiration-131814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-131814
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-131814: (5.303740912s)
--- PASS: TestCertExpiration (256.69s)

TestDockerFlags (46.57s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-131736 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-131736 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (42.887026829s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-131736 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-131736 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-131736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-131736
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-131736: (3.370851344s)
--- PASS: TestDockerFlags (46.57s)

TestForceSystemdFlag (40.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-131733 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-131733 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (37.003191283s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-131733 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-131733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-131733
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-131733: (3.333016238s)
--- PASS: TestForceSystemdFlag (40.51s)

TestForceSystemdEnv (46.8s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-131646 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0108 13:17:03.577611    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-131646 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (41.318457323s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-131646 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-131646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-131646

=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-131646: (5.310790639s)
--- PASS: TestForceSystemdEnv (46.80s)

TestHyperKitDriverInstallOrUpdate (7.3s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.30s)

TestErrorSpam/setup (37.94s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-123132 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-123132 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 --driver=hyperkit : (37.937571416s)
--- PASS: TestErrorSpam/setup (37.94s)

TestErrorSpam/start (1.48s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 start --dry-run
--- PASS: TestErrorSpam/start (1.48s)

TestErrorSpam/status (0.48s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 status
--- PASS: TestErrorSpam/status (0.48s)

TestErrorSpam/pause (1.31s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 pause
--- PASS: TestErrorSpam/pause (1.31s)

TestErrorSpam/unpause (1.39s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 unpause
--- PASS: TestErrorSpam/unpause (1.39s)

TestErrorSpam/stop (3.69s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 stop: (3.246222475s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123132 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-123132 stop
--- PASS: TestErrorSpam/stop (3.69s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/test/nested/copy/4201/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (91.77s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123219 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-123219 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m31.769002617s)
--- PASS: TestFunctional/serial/StartWithProxy (91.77s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (45.75s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123219 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-123219 --alsologtostderr -v=8: (45.749295358s)
functional_test.go:656: soft start took 45.749755305s for "functional-123219" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.75s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-123219 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (8.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 cache add k8s.gcr.io/pause:3.1: (2.937821697s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 cache add k8s.gcr.io/pause:3.3: (2.727945864s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 cache add k8s.gcr.io/pause:latest: (2.415460306s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.08s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-123219 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local927879643/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 cache add minikube-local-cache-test:functional-123219
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 cache delete minikube-local-cache-test:functional-123219
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-123219
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123219 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (133.362842ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 cache reload: (1.383596689s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
E0108 12:34:48.627480    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:34:48.633754    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:34:48.644127    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:34:48.664304    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:34:48.704515    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
E0108 12:34:48.785173    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 kubectl -- --context functional-123219 get pods
E0108 12:34:48.946129    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:34:49.266252    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-123219 get pods
E0108 12:34:49.907574    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

TestFunctional/serial/ExtraConfig (51.36s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123219 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 12:34:51.188826    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:34:53.749653    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:34:58.870278    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:35:09.111583    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:35:29.591900    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-123219 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (51.363963141s)
functional_test.go:754: restart took 51.364146084s for "functional-123219" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (51.36s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-123219 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.61s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 logs: (2.606409677s)
--- PASS: TestFunctional/serial/LogsCmd (2.61s)

TestFunctional/serial/LogsFileCmd (2.68s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd4085072930/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd4085072930/001/logs.txt: (2.673976223s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.68s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123219 config get cpus: exit status 14 (66.600632ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123219 config get cpus: exit status 14 (58.415247ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (8.32s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-123219 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-123219 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 5860: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.32s)

TestFunctional/parallel/DryRun (0.91s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123219 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-123219 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (451.778554ms)

-- stdout --
	* [functional-123219] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	* Using the hyperkit driver based on existing profile

-- /stdout --
** stderr ** 
	I0108 12:36:44.269008    5829 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:36:44.269186    5829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:36:44.269192    5829 out.go:309] Setting ErrFile to fd 2...
	I0108 12:36:44.269196    5829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:36:44.269307    5829 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
	I0108 12:36:44.269806    5829 out.go:303] Setting JSON to false
	I0108 12:36:44.289108    5829 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2178,"bootTime":1673208026,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0108 12:36:44.289219    5829 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:36:44.311395    5829 out.go:177] * [functional-123219] minikube v1.28.0 on Darwin 13.0.1
	I0108 12:36:44.333392    5829 notify.go:220] Checking for updates...
	I0108 12:36:44.333411    5829 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 12:36:44.354981    5829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	I0108 12:36:44.399144    5829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:36:44.443152    5829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:36:44.464911    5829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	I0108 12:36:44.486707    5829 config.go:180] Loaded profile config "functional-123219": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:36:44.487395    5829 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:36:44.487475    5829 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:36:44.495328    5829 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:50395
	I0108 12:36:44.495725    5829 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:36:44.496174    5829 main.go:134] libmachine: Using API Version  1
	I0108 12:36:44.496187    5829 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:36:44.496388    5829 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:36:44.496503    5829 main.go:134] libmachine: (functional-123219) Calling .DriverName
	I0108 12:36:44.496625    5829 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 12:36:44.496902    5829 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:36:44.496922    5829 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:36:44.503818    5829 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:50397
	I0108 12:36:44.504176    5829 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:36:44.504473    5829 main.go:134] libmachine: Using API Version  1
	I0108 12:36:44.504484    5829 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:36:44.504676    5829 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:36:44.504779    5829 main.go:134] libmachine: (functional-123219) Calling .DriverName
	I0108 12:36:44.532010    5829 out.go:177] * Using the hyperkit driver based on existing profile
	I0108 12:36:44.553074    5829 start.go:294] selected driver: hyperkit
	I0108 12:36:44.553105    5829 start.go:838] validating driver "hyperkit" against &{Name:functional-123219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-123219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-serv
er:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:36:44.553337    5829 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 12:36:44.577791    5829 out.go:177] 
	W0108 12:36:44.599014    5829 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 12:36:44.620211    5829 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123219 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (0.91s)

TestFunctional/parallel/InternationalLanguage (0.49s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123219 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-123219 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (492.928647ms)

-- stdout --
	* [functional-123219] minikube v1.28.0 sur Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	* Utilisation du pilote hyperkit basé sur le profil existant

-- /stdout --
** stderr ** 
	I0108 12:36:45.169820    5845 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:36:45.170011    5845 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:36:45.170017    5845 out.go:309] Setting ErrFile to fd 2...
	I0108 12:36:45.170021    5845 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:36:45.170150    5845 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
	I0108 12:36:45.170595    5845 out.go:303] Setting JSON to false
	I0108 12:36:45.189937    5845 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2179,"bootTime":1673208026,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0108 12:36:45.190059    5845 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:36:45.211468    5845 out.go:177] * [functional-123219] minikube v1.28.0 sur Darwin 13.0.1
	I0108 12:36:45.254586    5845 notify.go:220] Checking for updates...
	I0108 12:36:45.276082    5845 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 12:36:45.297753    5845 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	I0108 12:36:45.319628    5845 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:36:45.341319    5845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:36:45.362505    5845 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	I0108 12:36:45.383616    5845 config.go:180] Loaded profile config "functional-123219": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:36:45.383969    5845 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:36:45.384014    5845 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:36:45.390880    5845 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:50409
	I0108 12:36:45.391265    5845 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:36:45.391748    5845 main.go:134] libmachine: Using API Version  1
	I0108 12:36:45.391762    5845 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:36:45.392013    5845 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:36:45.392100    5845 main.go:134] libmachine: (functional-123219) Calling .DriverName
	I0108 12:36:45.392216    5845 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 12:36:45.392478    5845 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:36:45.392500    5845 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:36:45.399222    5845 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:50411
	I0108 12:36:45.399600    5845 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:36:45.399962    5845 main.go:134] libmachine: Using API Version  1
	I0108 12:36:45.399976    5845 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:36:45.400183    5845 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:36:45.400272    5845 main.go:134] libmachine: (functional-123219) Calling .DriverName
	I0108 12:36:45.444158    5845 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0108 12:36:45.486580    5845 start.go:294] selected driver: hyperkit
	I0108 12:36:45.486629    5845 start.go:838] validating driver "hyperkit" against &{Name:functional-123219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-123219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-serv
er:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:36:45.486825    5845 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 12:36:45.511106    5845 out.go:177] 
	W0108 12:36:45.532400    5845 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 12:36:45.553327    5845 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.49s)

TestFunctional/parallel/StatusCmd (0.49s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.49s)

TestFunctional/parallel/ServiceCmd (13.3s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-123219 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-123219 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-bntgh" [f06f3627-1a55-49b1-8241-0846eead36d1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-bntgh" [f06f3627-1a55-49b1-8241-0846eead36d1] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.009231011s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 service list
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 service --namespace=default --https --url hello-node
functional_test.go:1476: found endpoint: https://192.168.64.4:30705
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.64.4:30705
--- PASS: TestFunctional/parallel/ServiceCmd (13.30s)

TestFunctional/parallel/ServiceCmdConnect (7.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-123219 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-123219 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-t9tvz" [1eb945fa-8827-4615-bd89-af95f150b660] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-t9tvz" [1eb945fa-8827-4615-bd89-af95f150b660] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.008265725s
functional_test.go:1579: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.64.4:32159
functional_test.go:1605: http://192.168.64.4:32159: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6458c8fb6f-t9tvz

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.64.4:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.64.4:32159
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.57s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.29s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

TestFunctional/parallel/PersistentVolumeClaim (26.65s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [18748510-e9c8-4faa-9436-1fd7b7f8ffb6] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00793552s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-123219 get storageclass -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-123219 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-123219 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-123219 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-123219 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [f039fb6f-b0fa-4dcf-a7ed-95d593983e99] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [f039fb6f-b0fa-4dcf-a7ed-95d593983e99] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [f039fb6f-b0fa-4dcf-a7ed-95d593983e99] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.010109568s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-123219 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-123219 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-123219 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [20b80bf2-e44e-4167-98f4-27ede6f67cfe] Pending
helpers_test.go:342: "sp-pod" [20b80bf2-e44e-4167-98f4-27ede6f67cfe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [20b80bf2-e44e-4167-98f4-27ede6f67cfe] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00751669s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-123219 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.65s)

TestFunctional/parallel/SSHCmd (0.28s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.28s)

TestFunctional/parallel/CpCmd (0.63s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh -n functional-123219 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 cp functional-123219:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd520762557/001/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh -n functional-123219 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.63s)

TestFunctional/parallel/MySQL (25.21s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-123219 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-k75d8" [60cc44ee-e628-4c82-89a9-63d0ae64ec65] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-k75d8" [60cc44ee-e628-4c82-89a9-63d0ae64ec65] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.015792364s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-123219 exec mysql-596b7fcdbf-k75d8 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-123219 exec mysql-596b7fcdbf-k75d8 -- mysql -ppassword -e "show databases;": exit status 1 (208.529564ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
E0108 12:36:10.552316    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
functional_test.go:1734: (dbg) Run:  kubectl --context functional-123219 exec mysql-596b7fcdbf-k75d8 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-123219 exec mysql-596b7fcdbf-k75d8 -- mysql -ppassword -e "show databases;": exit status 1 (186.796236ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-123219 exec mysql-596b7fcdbf-k75d8 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-123219 exec mysql-596b7fcdbf-k75d8 -- mysql -ppassword -e "show databases;": exit status 1 (131.165842ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-123219 exec mysql-596b7fcdbf-k75d8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.21s)

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/4201/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "sudo cat /etc/test/nested/copy/4201/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.11s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/4201.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "sudo cat /etc/ssl/certs/4201.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/4201.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "sudo cat /usr/share/ca-certificates/4201.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/42012.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "sudo cat /etc/ssl/certs/42012.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/42012.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "sudo cat /usr/share/ca-certificates/42012.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.11s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-123219 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123219 ssh "sudo systemctl is-active crio": exit status 1 (120.655408ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

TestFunctional/parallel/License (0.83s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.83s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123219 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-123219
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-123219
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123219 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-123219 | 66e2fa3fc6b80 | 30B    |
| docker.io/library/nginx                     | alpine            | 1e415454686a6 | 40.7MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | 1403e55ab369c | 142MB  |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-123219 | ffd4cfbbe753e | 32.9MB |
| docker.io/localhost/my-image                | functional-123219 | bba95acc4d7c9 | 1.24MB |
| docker.io/library/mysql                     | 5.7               | d410f4167eea9 | 495MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
|---------------------------------------------|-------------------|---------------|--------|
2023/01/08 12:36:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123219 image ls --format json:
[{"id":"1403e55ab369cd1c8039c34e6b4d47ca40bbde39c371254c7cba14756f472f52","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"bba95acc4d7c970ab9f34d6524e339098a85414a8964512847bfcd0f875a4bac","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-123219"],"size":"1240000"},{"id":"66e2fa3fc6b80e6eed496626ea6173c70ebdac4bf25c2054d321eac2ecce0f0c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-123219"],"size":"30"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-123219"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"1e415454686a67ed83fb7aaa41acb2472e7457061bcdbbf0f5143d7a1a89b36c","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"d410f4167eea912908b2f9bcc24eff870cb3c131dfb755088b79a4188bfeb40f","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"495000000"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123219 image ls --format yaml:
- id: 66e2fa3fc6b80e6eed496626ea6173c70ebdac4bf25c2054d321eac2ecce0f0c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-123219
size: "30"
- id: 1403e55ab369cd1c8039c34e6b4d47ca40bbde39c371254c7cba14756f472f52
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 1e415454686a67ed83fb7aaa41acb2472e7457061bcdbbf0f5143d7a1a89b36c
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: d410f4167eea912908b2f9bcc24eff870cb3c131dfb755088b79a4188bfeb40f
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "495000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-123219
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

TestFunctional/parallel/ImageCommands/ImageBuild (4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123219 ssh pgrep buildkitd: exit status 1 (124.187578ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image build -t localhost/my-image:functional-123219 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 image build -t localhost/my-image:functional-123219 testdata/build: (3.671160178s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123219 image build -t localhost/my-image:functional-123219 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 399f8f48abac
Removing intermediate container 399f8f48abac
---> cc4cc2ab944c
Step 3/3 : ADD content.txt /
---> bba95acc4d7c
Successfully built bba95acc4d7c
Successfully tagged localhost/my-image:functional-123219
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.00s)

TestFunctional/parallel/ImageCommands/Setup (3.14s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.076959644s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-123219
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.14s)

TestFunctional/parallel/DockerEnv/bash (0.75s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-123219 docker-env) && out/minikube-darwin-amd64 status -p functional-123219"
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-123219 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.75s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image load --daemon gcr.io/google-containers/addon-resizer:functional-123219

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 image load --daemon gcr.io/google-containers/addon-resizer:functional-123219: (2.895012354s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.08s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image load --daemon gcr.io/google-containers/addon-resizer:functional-123219
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 image load --daemon gcr.io/google-containers/addon-resizer:functional-123219: (1.9075148s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.09s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.662075875s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-123219
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image load --daemon gcr.io/google-containers/addon-resizer:functional-123219
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 image load --daemon gcr.io/google-containers/addon-resizer:functional-123219: (3.265154573s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.14s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image save gcr.io/google-containers/addon-resizer:functional-123219 /Users/jenkins/workspace/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.98s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image rm gcr.io/google-containers/addon-resizer:functional-123219
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image load /Users/jenkins/workspace/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.290352215s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.45s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-123219
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 image save --daemon gcr.io/google-containers/addon-resizer:functional-123219
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-123219 image save --daemon gcr.io/google-containers/addon-resizer:functional-123219: (1.901463662s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-123219
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.02s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-123219 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-123219 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [932ae6eb-0e89-466b-9e59-351390e3e5bb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [932ae6eb-0e89-466b-9e59-351390e3e5bb] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.007012699s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-123219 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.107.37.142 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:254: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:262: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:286: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:294: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:359: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-123219 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "201.808796ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "86.79532ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "206.193906ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "81.396946ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/MountCmd/any-port (8.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-123219 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3967743870/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1673210194407467000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3967743870/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1673210194407467000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3967743870/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1673210194407467000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3967743870/001/test-1673210194407467000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123219 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (122.99293ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 20:36 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 20:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 20:36 test-1673210194407467000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh cat /mount-9p/test-1673210194407467000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-123219 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [b8d1f385-ea16-4f53-8b57-b0f4a3c6cbfd] Pending
helpers_test.go:342: "busybox-mount" [b8d1f385-ea16-4f53-8b57-b0f4a3c6cbfd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [b8d1f385-ea16-4f53-8b57-b0f4a3c6cbfd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [b8d1f385-ea16-4f53-8b57-b0f4a3c6cbfd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00710543s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-123219 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123219 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3967743870/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.09s)

TestFunctional/parallel/MountCmd/specific-port (1.72s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-123219 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port717764457/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123219 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (155.892107ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123219 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port717764457/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123219 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123219 ssh "sudo umount -f /mount-9p": exit status 1 (118.593743ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-123219 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123219 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port717764457/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.72s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-123219
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-123219
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-123219
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestIngressAddonLegacy/StartLegacyK8sCluster (106.7s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-123659 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E0108 12:37:32.472942    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-123659 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m46.699683107s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (106.70s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.75s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-123659 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-123659 addons enable ingress --alsologtostderr -v=5: (13.749022231s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.75s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-123659 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (37.52s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:169: (dbg) Run:  kubectl --context ingress-addon-legacy-123659 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:169: (dbg) Done: kubectl --context ingress-addon-legacy-123659 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.652352209s)
addons_test.go:189: (dbg) Run:  kubectl --context ingress-addon-legacy-123659 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context ingress-addon-legacy-123659 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [67f15ead-3e1c-4867-92d4-88a440ce9265] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [67f15ead-3e1c-4867-92d4-88a440ce9265] Running
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.006958124s
addons_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-123659 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-123659 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-123659 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.64.5
addons_test.go:263: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-123659 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-123659 addons disable ingress-dns --alsologtostderr -v=1: (1.75203615s)
addons_test.go:268: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-123659 addons disable ingress --alsologtostderr -v=1
addons_test.go:268: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-123659 addons disable ingress --alsologtostderr -v=1: (7.233006149s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (37.52s)

                                                
                                    
TestJSONOutput/start/Command (54.58s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-123940 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0108 12:39:48.629468    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:40:16.315593    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-123940 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (54.577072755s)
--- PASS: TestJSONOutput/start/Command (54.58s)

                                                
                                    
TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.90s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-123940 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.90s)

                                                
                                    
TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.45s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-123940 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

                                                
                                    
TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.17s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-123940 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-123940 --output=json --user=testUser: (8.173455589s)
--- PASS: TestJSONOutput/stop/Command (8.17s)

                                                
                                    
TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.76s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-124044 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-124044 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (359.010816ms)
-- stdout --
	{"specversion":"1.0","id":"5c54d5a8-740f-4d19-839f-31fa86b7fdab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-124044] minikube v1.28.0 on Darwin 13.0.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c233e153-2a6a-4a53-9a88-11fe80234ea4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"aa22aa2a-d9b5-4665-b0d8-2ae900a9e3b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig"}}
	{"specversion":"1.0","id":"28e200fb-13f1-4a5f-ad6c-7d3bc056f934","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"0ae33fe2-c07c-45d8-9c09-b53713a994ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"88042708-2635-4efe-98c2-5aa136668a98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube"}}
	{"specversion":"1.0","id":"23134cac-61e2-4a87-b56e-79019d2601b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-124044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-124044
--- PASS: TestErrorJSONOutput (0.76s)

                                                
                                    
TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (93.96s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-124045 --driver=hyperkit 
E0108 12:40:50.375303    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:40:50.380472    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:40:50.391173    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:40:50.413224    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:40:50.454747    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:40:50.536816    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:40:50.696947    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:40:51.017113    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:40:51.657534    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:40:52.937717    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:40:55.499302    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:41:00.620035    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:41:10.862225    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-124045 --driver=hyperkit : (45.009938931s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-124045 --driver=hyperkit 
E0108 12:41:31.343555    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-124045 --driver=hyperkit : (39.369073762s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-124045
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-124045
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-124045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-124045
E0108 12:42:12.305283    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-124045: (3.350067833s)
helpers_test.go:175: Cleaning up "first-124045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-124045
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-124045: (5.270729914s)
--- PASS: TestMinikubeProfile (93.96s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (15.01s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-124219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-124219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (14.011538098s)
--- PASS: TestMountStart/serial/StartWithMountFirst (15.01s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.30s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-124219 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-124219 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (14.59s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-124219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-124219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (13.587731149s)
--- PASS: TestMountStart/serial/StartWithMountSecond (14.59s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.32s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-124219 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-124219 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.40s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-124219 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-124219 --alsologtostderr -v=5: (2.401795574s)
--- PASS: TestMountStart/serial/DeleteFirst (2.40s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-124219 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-124219 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (2.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-124219
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-124219: (2.217673001s)
--- PASS: TestMountStart/serial/Stop (2.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (15.96s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-124219
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-124219: (14.960691248s)
--- PASS: TestMountStart/serial/RestartStopped (15.96s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-124219 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-124219 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (126.85s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-124313 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0108 12:43:34.226823    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:44:00.479506    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:00.484701    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:00.495996    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:00.516872    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:00.557773    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:00.638432    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:00.799478    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:01.121627    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:01.763152    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:03.044537    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:05.605362    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:10.727131    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:20.967872    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:41.449280    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:44:48.630255    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-124313 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (2m6.61358627s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.85s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.50s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- rollout status deployment/busybox
E0108 12:45:22.411331    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-124313 -- rollout status deployment/busybox: (3.726084694s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- exec busybox-65db55d5d6-8jcw8 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- exec busybox-65db55d5d6-hj2mr -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- exec busybox-65db55d5d6-8jcw8 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- exec busybox-65db55d5d6-hj2mr -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- exec busybox-65db55d5d6-8jcw8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- exec busybox-65db55d5d6-hj2mr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.50s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- exec busybox-65db55d5d6-8jcw8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- exec busybox-65db55d5d6-8jcw8 -- sh -c "ping -c 1 192.168.64.1"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- exec busybox-65db55d5d6-hj2mr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124313 -- exec busybox-65db55d5d6-hj2mr -- sh -c "ping -c 1 192.168.64.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
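The host-IP lookup above pipes busybox `nslookup` output through `awk 'NR==5'` and `cut -d' ' -f3` to isolate the resolved address. A minimal local sketch of that pipeline, with a hard-coded sample of busybox-style output standing in for a real lookup (the sample text is illustrative, not captured from this run):

```shell
# Illustrative busybox-style nslookup output (hard-coded sample, not from this run).
lookup='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.64.1'

# Same pipeline as the test: take line 5, then the third space-separated field.
host_ip=$(printf '%s\n' "$lookup" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The test then pings the extracted address (`ping -c 1 192.168.64.1`) from inside each pod to confirm host reachability.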

TestMultiNode/serial/AddNode (43.71s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-124313 -v 3 --alsologtostderr
E0108 12:45:50.376468    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-124313 -v 3 --alsologtostderr: (43.395328738s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.71s)

TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (5.41s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp testdata/cp-test.txt multinode-124313:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp multinode-124313:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3779971516/001/cp-test_multinode-124313.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp multinode-124313:/home/docker/cp-test.txt multinode-124313-m02:/home/docker/cp-test_multinode-124313_multinode-124313-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m02 "sudo cat /home/docker/cp-test_multinode-124313_multinode-124313-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp multinode-124313:/home/docker/cp-test.txt multinode-124313-m03:/home/docker/cp-test_multinode-124313_multinode-124313-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m03 "sudo cat /home/docker/cp-test_multinode-124313_multinode-124313-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp testdata/cp-test.txt multinode-124313-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp multinode-124313-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3779971516/001/cp-test_multinode-124313-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp multinode-124313-m02:/home/docker/cp-test.txt multinode-124313:/home/docker/cp-test_multinode-124313-m02_multinode-124313.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313 "sudo cat /home/docker/cp-test_multinode-124313-m02_multinode-124313.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp multinode-124313-m02:/home/docker/cp-test.txt multinode-124313-m03:/home/docker/cp-test_multinode-124313-m02_multinode-124313-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m03 "sudo cat /home/docker/cp-test_multinode-124313-m02_multinode-124313-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp testdata/cp-test.txt multinode-124313-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp multinode-124313-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3779971516/001/cp-test_multinode-124313-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp multinode-124313-m03:/home/docker/cp-test.txt multinode-124313:/home/docker/cp-test_multinode-124313-m03_multinode-124313.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313 "sudo cat /home/docker/cp-test_multinode-124313-m03_multinode-124313.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 cp multinode-124313-m03:/home/docker/cp-test.txt multinode-124313-m02:/home/docker/cp-test_multinode-124313-m03_multinode-124313-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 ssh -n multinode-124313-m02 "sudo cat /home/docker/cp-test_multinode-124313-m03_multinode-124313-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.41s)
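Every pair in the copy matrix above follows one pattern: `minikube cp` the file to a node, then `minikube ssh … "sudo cat …"` it back and compare contents. A local sketch of that round trip, with plain `cp`/`cat` standing in for the minikube commands and hypothetical temp files in place of node paths:

```shell
# Stand-in for: minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
src=$(mktemp) && dst=$(mktemp)
printf 'cp-test contents\n' > "$src"   # hypothetical file contents, for illustration only
cp "$src" "$dst"

# Stand-in for: minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
copied=$(cat "$dst")
[ "$copied" = "$(cat "$src")" ] && echo "contents match"
rm -f "$src" "$dst"
```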

TestMultiNode/serial/StopNode (2.67s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 node stop m03
E0108 12:46:18.068065    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-124313 node stop m03: (2.189356131s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-124313 status: exit status 7 (243.865833ms)

-- stdout --
	multinode-124313
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-124313-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-124313-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-124313 status --alsologtostderr: exit status 7 (240.267172ms)

-- stdout --
	multinode-124313
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-124313-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-124313-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0108 12:46:18.980357    7118 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:46:18.980648    7118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:46:18.980653    7118 out.go:309] Setting ErrFile to fd 2...
	I0108 12:46:18.980657    7118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:46:18.980776    7118 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
	I0108 12:46:18.980976    7118 out.go:303] Setting JSON to false
	I0108 12:46:18.981002    7118 mustload.go:65] Loading cluster: multinode-124313
	I0108 12:46:18.981043    7118 notify.go:220] Checking for updates...
	I0108 12:46:18.981311    7118 config.go:180] Loaded profile config "multinode-124313": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:46:18.981325    7118 status.go:255] checking status of multinode-124313 ...
	I0108 12:46:18.981695    7118 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:46:18.981750    7118 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:46:18.988532    7118 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51395
	I0108 12:46:18.988910    7118 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:46:18.989318    7118 main.go:134] libmachine: Using API Version  1
	I0108 12:46:18.989330    7118 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:46:18.989518    7118 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:46:18.989625    7118 main.go:134] libmachine: (multinode-124313) Calling .GetState
	I0108 12:46:18.989711    7118 main.go:134] libmachine: (multinode-124313) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 12:46:18.989788    7118 main.go:134] libmachine: (multinode-124313) DBG | hyperkit pid from json: 6658
	I0108 12:46:18.990968    7118 status.go:330] multinode-124313 host status = "Running" (err=<nil>)
	I0108 12:46:18.990983    7118 host.go:66] Checking if "multinode-124313" exists ...
	I0108 12:46:18.991280    7118 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:46:18.991303    7118 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:46:18.998164    7118 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51397
	I0108 12:46:18.998539    7118 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:46:18.998886    7118 main.go:134] libmachine: Using API Version  1
	I0108 12:46:18.998903    7118 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:46:18.999086    7118 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:46:18.999172    7118 main.go:134] libmachine: (multinode-124313) Calling .GetIP
	I0108 12:46:18.999247    7118 host.go:66] Checking if "multinode-124313" exists ...
	I0108 12:46:18.999518    7118 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:46:18.999550    7118 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:46:19.006018    7118 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51399
	I0108 12:46:19.006401    7118 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:46:19.006702    7118 main.go:134] libmachine: Using API Version  1
	I0108 12:46:19.006713    7118 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:46:19.006921    7118 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:46:19.007021    7118 main.go:134] libmachine: (multinode-124313) Calling .DriverName
	I0108 12:46:19.007162    7118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 12:46:19.007184    7118 main.go:134] libmachine: (multinode-124313) Calling .GetSSHHostname
	I0108 12:46:19.007273    7118 main.go:134] libmachine: (multinode-124313) Calling .GetSSHPort
	I0108 12:46:19.007358    7118 main.go:134] libmachine: (multinode-124313) Calling .GetSSHKeyPath
	I0108 12:46:19.007446    7118 main.go:134] libmachine: (multinode-124313) Calling .GetSSHUsername
	I0108 12:46:19.007531    7118 sshutil.go:53] new ssh client: &{IP:192.168.64.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/multinode-124313/id_rsa Username:docker}
	I0108 12:46:19.040846    7118 ssh_runner.go:195] Run: systemctl --version
	I0108 12:46:19.044524    7118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 12:46:19.053416    7118 kubeconfig.go:92] found "multinode-124313" server: "https://192.168.64.11:8443"
	I0108 12:46:19.053434    7118 api_server.go:165] Checking apiserver status ...
	I0108 12:46:19.053472    7118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:46:19.061739    7118 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1745/cgroup
	I0108 12:46:19.067421    7118 api_server.go:181] apiserver freezer: "9:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4812cc099e18e56bfa17b68edc8b22c2.slice/docker-e60b53e674f4ad1f7023397c35f4a73c7fb515204dfa554d8992c51df528fd8b.scope"
	I0108 12:46:19.067462    7118 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4812cc099e18e56bfa17b68edc8b22c2.slice/docker-e60b53e674f4ad1f7023397c35f4a73c7fb515204dfa554d8992c51df528fd8b.scope/freezer.state
	I0108 12:46:19.073294    7118 api_server.go:203] freezer state: "THAWED"
	I0108 12:46:19.073307    7118 api_server.go:252] Checking apiserver healthz at https://192.168.64.11:8443/healthz ...
	I0108 12:46:19.077455    7118 api_server.go:278] https://192.168.64.11:8443/healthz returned 200:
	ok
	I0108 12:46:19.077469    7118 status.go:421] multinode-124313 apiserver status = Running (err=<nil>)
	I0108 12:46:19.077478    7118 status.go:257] multinode-124313 status: &{Name:multinode-124313 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 12:46:19.077491    7118 status.go:255] checking status of multinode-124313-m02 ...
	I0108 12:46:19.077750    7118 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:46:19.077778    7118 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:46:19.084627    7118 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51403
	I0108 12:46:19.084990    7118 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:46:19.085336    7118 main.go:134] libmachine: Using API Version  1
	I0108 12:46:19.085350    7118 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:46:19.085534    7118 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:46:19.085626    7118 main.go:134] libmachine: (multinode-124313-m02) Calling .GetState
	I0108 12:46:19.085708    7118 main.go:134] libmachine: (multinode-124313-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 12:46:19.085776    7118 main.go:134] libmachine: (multinode-124313-m02) DBG | hyperkit pid from json: 6741
	I0108 12:46:19.086967    7118 status.go:330] multinode-124313-m02 host status = "Running" (err=<nil>)
	I0108 12:46:19.086978    7118 host.go:66] Checking if "multinode-124313-m02" exists ...
	I0108 12:46:19.087264    7118 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:46:19.087286    7118 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:46:19.094180    7118 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51405
	I0108 12:46:19.094577    7118 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:46:19.094953    7118 main.go:134] libmachine: Using API Version  1
	I0108 12:46:19.094971    7118 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:46:19.095178    7118 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:46:19.095283    7118 main.go:134] libmachine: (multinode-124313-m02) Calling .GetIP
	I0108 12:46:19.095367    7118 host.go:66] Checking if "multinode-124313-m02" exists ...
	I0108 12:46:19.095645    7118 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:46:19.095666    7118 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:46:19.102254    7118 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51407
	I0108 12:46:19.102604    7118 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:46:19.102907    7118 main.go:134] libmachine: Using API Version  1
	I0108 12:46:19.102919    7118 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:46:19.103095    7118 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:46:19.103211    7118 main.go:134] libmachine: (multinode-124313-m02) Calling .DriverName
	I0108 12:46:19.103341    7118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 12:46:19.103353    7118 main.go:134] libmachine: (multinode-124313-m02) Calling .GetSSHHostname
	I0108 12:46:19.103431    7118 main.go:134] libmachine: (multinode-124313-m02) Calling .GetSSHPort
	I0108 12:46:19.103517    7118 main.go:134] libmachine: (multinode-124313-m02) Calling .GetSSHKeyPath
	I0108 12:46:19.103584    7118 main.go:134] libmachine: (multinode-124313-m02) Calling .GetSSHUsername
	I0108 12:46:19.103658    7118 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/multinode-124313-m02/id_rsa Username:docker}
	I0108 12:46:19.145704    7118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 12:46:19.154508    7118 status.go:257] multinode-124313-m02 status: &{Name:multinode-124313-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 12:46:19.154531    7118 status.go:255] checking status of multinode-124313-m03 ...
	I0108 12:46:19.154831    7118 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 12:46:19.154854    7118 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 12:46:19.161688    7118 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51410
	I0108 12:46:19.162085    7118 main.go:134] libmachine: () Calling .GetVersion
	I0108 12:46:19.162409    7118 main.go:134] libmachine: Using API Version  1
	I0108 12:46:19.162424    7118 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 12:46:19.162620    7118 main.go:134] libmachine: () Calling .GetMachineName
	I0108 12:46:19.162715    7118 main.go:134] libmachine: (multinode-124313-m03) Calling .GetState
	I0108 12:46:19.162789    7118 main.go:134] libmachine: (multinode-124313-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 12:46:19.162860    7118 main.go:134] libmachine: (multinode-124313-m03) DBG | hyperkit pid from json: 6872
	I0108 12:46:19.163985    7118 main.go:134] libmachine: (multinode-124313-m03) DBG | hyperkit pid 6872 missing from process table
	I0108 12:46:19.164005    7118 status.go:330] multinode-124313-m03 host status = "Stopped" (err=<nil>)
	I0108 12:46:19.164011    7118 status.go:343] host is not running, skipping remaining checks
	I0108 12:46:19.164017    7118 status.go:257] multinode-124313-m03 status: &{Name:multinode-124313-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.67s)
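As the captured runs above show, `minikube status` exits with code 7 (not 0) once any host in the profile is stopped, so callers have to branch on the exit code rather than treat a non-zero exit as a hard failure. A sketch of that pattern, with a hypothetical stand-in function in place of the real binary:

```shell
# Hypothetical stand-in: returns 7 the way `minikube status` did above
# when the m03 host was stopped.
fake_status() { return 7; }

if fake_status; then
  state="all running"
else
  rc=$?
  # Exit code 7 is what the runs above produced for a stopped host.
  [ "$rc" -eq 7 ] && state="at least one node stopped (rc=$rc)"
fi
echo "$state"
```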

TestMultiNode/serial/StartAfterStop (30.91s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 node start m03 --alsologtostderr
E0108 12:46:44.332159    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-124313 node start m03 --alsologtostderr: (30.552159201s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.91s)

TestMultiNode/serial/RestartKeepsNodes (838.48s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-124313
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-124313
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-124313: (12.414672104s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-124313 --wait=true -v=8 --alsologtostderr
E0108 12:49:00.477544    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:49:28.169806    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:49:48.625986    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:50:50.371886    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:51:11.673308    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:54:00.475863    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:54:48.627694    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 12:55:50.372238    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:57:13.424614    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 12:59:00.477484    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 12:59:48.625975    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 13:00:23.531289    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-124313 --wait=true -v=8 --alsologtostderr: (13m45.947980509s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-124313
--- PASS: TestMultiNode/serial/RestartKeepsNodes (838.48s)

TestMultiNode/serial/DeleteNode (4.93s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 node delete m03
E0108 13:00:50.371079    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-124313 node delete m03: (4.612783787s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.93s)

TestMultiNode/serial/StopMultiNode (3.5s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-124313 stop: (3.346560494s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-124313 status: exit status 7 (74.805309ms)

-- stdout --
	multinode-124313
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-124313-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-124313 status --alsologtostderr: exit status 7 (75.285049ms)

-- stdout --
	multinode-124313
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-124313-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0108 13:00:56.961232    8112 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:00:56.961418    8112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:00:56.961423    8112 out.go:309] Setting ErrFile to fd 2...
	I0108 13:00:56.961428    8112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:00:56.961539    8112 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
	I0108 13:00:56.961749    8112 out.go:303] Setting JSON to false
	I0108 13:00:56.961771    8112 mustload.go:65] Loading cluster: multinode-124313
	I0108 13:00:56.961800    8112 notify.go:220] Checking for updates...
	I0108 13:00:56.962088    8112 config.go:180] Loaded profile config "multinode-124313": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:00:56.962104    8112 status.go:255] checking status of multinode-124313 ...
	I0108 13:00:56.962447    8112 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:00:56.962488    8112 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:00:56.969369    8112 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51627
	I0108 13:00:56.969691    8112 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:00:56.970118    8112 main.go:134] libmachine: Using API Version  1
	I0108 13:00:56.970132    8112 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:00:56.970345    8112 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:00:56.970437    8112 main.go:134] libmachine: (multinode-124313) Calling .GetState
	I0108 13:00:56.970526    8112 main.go:134] libmachine: (multinode-124313) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:00:56.970591    8112 main.go:134] libmachine: (multinode-124313) DBG | hyperkit pid from json: 7222
	I0108 13:00:56.971486    8112 main.go:134] libmachine: (multinode-124313) DBG | hyperkit pid 7222 missing from process table
	I0108 13:00:56.971513    8112 status.go:330] multinode-124313 host status = "Stopped" (err=<nil>)
	I0108 13:00:56.971520    8112 status.go:343] host is not running, skipping remaining checks
	I0108 13:00:56.971526    8112 status.go:257] multinode-124313 status: &{Name:multinode-124313 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 13:00:56.971543    8112 status.go:255] checking status of multinode-124313-m02 ...
	I0108 13:00:56.971797    8112 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0108 13:00:56.971816    8112 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0108 13:00:56.978529    8112 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51629
	I0108 13:00:56.978862    8112 main.go:134] libmachine: () Calling .GetVersion
	I0108 13:00:56.979191    8112 main.go:134] libmachine: Using API Version  1
	I0108 13:00:56.979204    8112 main.go:134] libmachine: () Calling .SetConfigRaw
	I0108 13:00:56.979432    8112 main.go:134] libmachine: () Calling .GetMachineName
	I0108 13:00:56.979539    8112 main.go:134] libmachine: (multinode-124313-m02) Calling .GetState
	I0108 13:00:56.979623    8112 main.go:134] libmachine: (multinode-124313-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0108 13:00:56.979688    8112 main.go:134] libmachine: (multinode-124313-m02) DBG | hyperkit pid from json: 7525
	I0108 13:00:56.980574    8112 main.go:134] libmachine: (multinode-124313-m02) DBG | hyperkit pid 7525 missing from process table
	I0108 13:00:56.980604    8112 status.go:330] multinode-124313-m02 host status = "Stopped" (err=<nil>)
	I0108 13:00:56.980611    8112 status.go:343] host is not running, skipping remaining checks
	I0108 13:00:56.980616    8112 status.go:257] multinode-124313-m02 status: &{Name:multinode-124313-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (3.50s)

TestMultiNode/serial/RestartMultiNode (555.15s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-124313 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0108 13:04:00.475688    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 13:04:48.625706    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 13:05:50.371371    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 13:07:51.674536    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 13:09:00.475349    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 13:09:48.625631    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-124313 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (9m14.818281899s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124313 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (555.15s)

TestMultiNode/serial/ValidateNameConflict (47.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-124313
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-124313-m02 --driver=hyperkit 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-124313-m02 --driver=hyperkit : exit status 14 (437.344151ms)

-- stdout --
	* [multinode-124313-m02] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-124313-m02' is duplicated with machine name 'multinode-124313-m02' in profile 'multinode-124313'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-124313-m03 --driver=hyperkit 
E0108 13:10:50.370644    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-124313-m03 --driver=hyperkit : (41.385007889s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-124313
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-124313: exit status 80 (262.388216ms)

-- stdout --
	* Adding node m03 to cluster multinode-124313
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-124313-m03 already exists in multinode-124313-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-124313-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-124313-m03: (5.27185454s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.41s)

TestPreload (142.44s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-131103 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-131103 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m6.303730698s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-131103 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-131103 -- docker pull gcr.io/k8s-minikube/busybox: (2.236802915s)
preload_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-131103 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit  --kubernetes-version=v1.24.6
preload_test.go:67: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-131103 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit  --kubernetes-version=v1.24.6: (1m8.460281489s)
preload_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-131103 -- docker images
helpers_test.go:175: Cleaning up "test-preload-131103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-131103
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-131103: (5.274066051s)
--- PASS: TestPreload (142.44s)

TestScheduledStopUnix (108.13s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-131325 --memory=2048 --driver=hyperkit 
E0108 13:13:53.470481    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 13:14:00.521359    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-131325 --memory=2048 --driver=hyperkit : (36.614480024s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-131325 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-131325 -n scheduled-stop-131325
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-131325 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-131325 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-131325 -n scheduled-stop-131325
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-131325
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-131325 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0108 13:14:48.673110    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-131325
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-131325: exit status 7 (69.475212ms)

-- stdout --
	scheduled-stop-131325
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-131325 -n scheduled-stop-131325
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-131325 -n scheduled-stop-131325: exit status 7 (65.329822ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-131325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-131325
--- PASS: TestScheduledStopUnix (108.13s)

TestSkaffold (75.55s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe472112869 version
skaffold_test.go:63: skaffold version: v2.0.4
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-131513 --memory=2600 --driver=hyperkit 
E0108 13:15:50.416349    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-131513 --memory=2600 --driver=hyperkit : (38.124621286s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe472112869 run --minikube-profile skaffold-131513 --kube-context skaffold-131513 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe472112869 run --minikube-profile skaffold-131513 --kube-context skaffold-131513 --status-check=true --port-forward=false --interactive=false: (17.296076032s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-545d78d875-pfjgd" [862ce981-7a6d-4545-85a1-15610a120c9f] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.010241677s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-58f7fd8c6f-js87v" [826981cf-31ff-46cc-b0d4-3394d501318d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006778492s
helpers_test.go:175: Cleaning up "skaffold-131513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-131513
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-131513: (3.391057964s)
--- PASS: TestSkaffold (75.55s)

TestRunningBinaryUpgrade (156.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.2402837385.exe start -p running-upgrade-131911 --memory=2200 --vm-driver=hyperkit 
E0108 13:19:48.672008    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.2402837385.exe start -p running-upgrade-131911 --memory=2200 --vm-driver=hyperkit : (1m34.464212701s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-131911 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0108 13:20:50.418966    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 13:21:15.999384    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:16.004720    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:16.014945    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:16.036290    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:16.077642    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:16.158132    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:16.319104    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:16.641329    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:17.281620    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:18.563679    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:21.124585    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:26.246912    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:21:36.488501    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
version_upgrade_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-131911 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (55.271547567s)
helpers_test.go:175: Cleaning up "running-upgrade-131911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-131911
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-131911: (5.27424327s)
--- PASS: TestRunningBinaryUpgrade (156.76s)

TestKubernetesUpgrade (138.23s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-132147 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-132147 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m10.708363391s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-132147
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-132147: (2.242491318s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-132147 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-132147 status --format={{.Host}}: exit status 7 (68.94762ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-132147 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-132147 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=hyperkit : (36.727991219s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-132147 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-132147 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-132147 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (423.432892ms)

-- stdout --
	* [kubernetes-upgrade-132147] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-132147
	    minikube start -p kubernetes-upgrade-132147 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1321472 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-132147 --kubernetes-version=v1.25.3
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-132147 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=hyperkit 
E0108 13:23:59.852525    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:24:00.524724    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-132147 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=hyperkit : (24.513974859s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-132147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-132147
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-132147: (3.496091977s)
--- PASS: TestKubernetesUpgrade (138.23s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.92s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2461150158/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2461150158/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2461150158/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2461150158/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.92s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.11s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1968799050/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1968799050/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1968799050/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1968799050/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.11s)

TestStoppedBinaryUpgrade/Setup (1.96s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.96s)

TestStoppedBinaryUpgrade/Upgrade (180.28s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.4243425253.exe start -p stopped-upgrade-132230 --memory=2200 --vm-driver=hyperkit 
E0108 13:22:37.930749    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.4243425253.exe start -p stopped-upgrade-132230 --memory=2200 --vm-driver=hyperkit : (1m27.880555154s)
version_upgrade_test.go:199: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.4243425253.exe -p stopped-upgrade-132230 stop
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.4243425253.exe -p stopped-upgrade-132230 stop: (8.069681182s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-132230 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0108 13:24:31.723701    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 13:24:48.673370    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-132230 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m24.330654382s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (180.28s)

TestPause/serial/Start (52.86s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-132406 --memory=2048 --install-addons=false --wait=all --driver=hyperkit
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-132406 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (52.86215575s)
--- PASS: TestPause/serial/Start (52.86s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.12s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-132230
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-132230: (3.118718867s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.12s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-132541 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-132541 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (437.201186ms)

-- stdout --
	* [NoKubernetes-132541] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)
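The exit-status-14 run above is minikube's usage check rejecting `--kubernetes-version` together with `--no-kubernetes`. A minimal sketch of that mutual-exclusion logic, for readers reproducing the failure (`check_flags` is a hypothetical stand-in, not minikube's actual source):

```shell
# Hypothetical stand-in for the MK_USAGE validation seen above: the two
# flags are mutually exclusive, so passing both yields exit status 14.
check_flags() {
  no_kubernetes="$1"        # value of --no-kubernetes ("true"/"false")
  kubernetes_version="$2"   # value of --kubernetes-version ("" if unset)
  if [ "$no_kubernetes" = "true" ] && [ -n "$kubernetes_version" ]; then
    echo "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes" >&2
    return 14
  fi
  return 0
}
```

As the captured stderr notes, a globally configured version can be cleared with `minikube config unset kubernetes-version`.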

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.72s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-132541 --driver=hyperkit
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-132541 --driver=hyperkit : (43.548245062s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-132541 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.72s)

TestNetworkPlugins/group/auto/Start (55.89s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=hyperkit 
E0108 13:26:15.999945    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=hyperkit : (55.892488403s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.89s)

TestNoKubernetes/serial/StartWithStopK8s (16.38s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-132541 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-132541 --no-kubernetes --driver=hyperkit : (13.820030591s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-132541 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-132541 status -o json: exit status 2 (145.255999ms)

-- stdout --
	{"Name":"NoKubernetes-132541","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-132541
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-132541: (2.415338183s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.38s)
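The `status -o json` call above exits nonzero because the kubelet and apiserver are stopped while the host keeps running, and the test inspects the flat single-line JSON it prints. A minimal sketch of extracting those fields from a script (sed-based; adequate for this flat output, though a real consumer should use a JSON parser):

```shell
# Parse the single-line status JSON captured above with POSIX sed.
status_json='{"Name":"NoKubernetes-132541","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

host=$(printf '%s' "$status_json" | sed -n 's/.*"Host":"\([^"]*\)".*/\1/p')
kubelet=$(printf '%s' "$status_json" | sed -n 's/.*"Kubelet":"\([^"]*\)".*/\1/p')
echo "Host=$host Kubelet=$kubelet"   # prints: Host=Running Kubelet=Stopped
```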

                                                
                                    
TestNoKubernetes/serial/Start (14.29s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-132541 --no-kubernetes --driver=hyperkit 
E0108 13:26:43.693540    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-132541 --no-kubernetes --driver=hyperkit : (14.286911711s)
--- PASS: TestNoKubernetes/serial/Start (14.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.15s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-131629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

TestNetworkPlugins/group/auto/NetCatPod (11.19s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-131629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-rtpjq" [4a93b892-295b-4e11-b4fd-835b8761b4f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-rtpjq" [4a93b892-295b-4e11-b4fd-835b8761b4f2] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006734638s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.19s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-132541 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-132541 "sudo systemctl is-active --quiet service kubelet": exit status 1 (124.832349ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)

TestNoKubernetes/serial/ProfileList (0.53s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.53s)

TestNoKubernetes/serial/Stop (2.23s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-132541
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-132541: (2.233553523s)
--- PASS: TestNoKubernetes/serial/Stop (2.23s)

TestNoKubernetes/serial/StartNoArgs (14.8s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-132541 --driver=hyperkit
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-132541 --driver=hyperkit : (14.800082833s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (14.80s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-131629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (5.11s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.105343011s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.11s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.12s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-132541 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-132541 "sudo systemctl is-active --quiet service kubelet": exit status 1 (123.904723ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.12s)

TestNetworkPlugins/group/cilium/Start (103.17s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=hyperkit
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=hyperkit : (1m43.173037854s)
--- PASS: TestNetworkPlugins/group/cilium/Start (103.17s)

TestNetworkPlugins/group/calico/Start (316.44s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=hyperkit
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=hyperkit : (5m16.43768209s)
--- PASS: TestNetworkPlugins/group/calico/Start (316.44s)

TestNetworkPlugins/group/cilium/ControllerPod (5.01s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-8kpvd" [81dbabba-b07a-4445-9db5-5be7dd6a408e] Running
E0108 13:29:00.524566    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.012183582s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.01s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-131629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.16s)

TestNetworkPlugins/group/cilium/NetCatPod (12.67s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-131629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-zc2l5" [14806c49-7f65-47a7-9729-6e67218a0d36] Pending
helpers_test.go:342: "netcat-5788d667bd-zc2l5" [14806c49-7f65-47a7-9729-6e67218a0d36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-zc2l5" [14806c49-7f65-47a7-9729-6e67218a0d36] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.006328516s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.67s)

TestNetworkPlugins/group/cilium/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-131629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

TestNetworkPlugins/group/cilium/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

TestNetworkPlugins/group/cilium/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (91.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
E0108 13:29:48.674415    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 13:30:33.498277    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 13:30:50.420925    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m31.197596804s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.20s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-131629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context custom-flannel-131629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-zfm5p" [ffd73856-4736-43d6-9171-007130b27388] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-zfm5p" [ffd73856-4736-43d6-9171-007130b27388] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.006654248s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.22s)

TestNetworkPlugins/group/custom-flannel/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context custom-flannel-131629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context custom-flannel-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context custom-flannel-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/false/Start (53.76s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=hyperkit 
E0108 13:31:16.002187    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:31:55.607753    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:31:55.613635    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:31:55.623761    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:31:55.644067    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:31:55.685197    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:31:55.766780    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:31:55.927702    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:31:56.248033    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:31:56.888966    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:31:58.169754    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:32:00.731291    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=hyperkit : (53.757723128s)
--- PASS: TestNetworkPlugins/group/false/Start (53.76s)

TestNetworkPlugins/group/false/KubeletFlags (0.19s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-131629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.19s)

TestNetworkPlugins/group/false/NetCatPod (11.19s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-131629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-lcm6d" [45557357-2e86-47b3-939f-46576605d303] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 13:32:05.852002    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-lcm6d" [45557357-2e86-47b3-939f-46576605d303] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.007623178s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.19s)

TestNetworkPlugins/group/false/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-131629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

TestNetworkPlugins/group/false/HairPin (5.1s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0108 13:32:16.092543    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.098109158s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.10s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=hyperkit 

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=hyperkit : (1m2.094558745s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.09s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-g72sw" [6e43d021-1461-4a84-86a9-7aaa4e506d75] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0108 13:32:36.574120    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.012913902s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-131629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-131629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-65s5t" [a0e17247-a549-40c9-94b4-28a85760ef6e] Pending
helpers_test.go:342: "netcat-5788d667bd-65s5t" [a0e17247-a549-40c9-94b4-28a85760ef6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-65s5t" [a0e17247-a549-40c9-94b4-28a85760ef6e] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006818216s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-131629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=hyperkit 
E0108 13:33:17.535241    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=hyperkit : (57.946611514s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.95s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-cpx2n" [5930500f-6926-4120-9e0d-c16c508ae6fb] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.010701208s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-131629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-131629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-kcrgz" [ad7271fa-9d48-448c-a9fe-0ef909fb1793] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-kcrgz" [ad7271fa-9d48-448c-a9fe-0ef909fb1793] Running
E0108 13:33:43.582766    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004920272s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-131629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=hyperkit 

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=hyperkit : (56.868955764s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.87s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:342: "kube-flannel-ds-amd64-8nj65" [66b9b48c-ed37-4f37-9509-e17b7a314b3d] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.012387153s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-131629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context flannel-131629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-4pbwb" [bd1a7d31-4746-4a22-bacd-d8f103b4a2b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 13:33:58.741459    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:33:58.746532    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:33:58.756581    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:33:58.777278    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:33:58.817933    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:33:58.898640    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:33:59.058754    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:33:59.379092    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:34:00.019501    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:34:00.524957    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 13:34:01.300789    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-4pbwb" [bd1a7d31-4746-4a22-bacd-d8f103b4a2b2] Running
E0108 13:34:03.861934    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:34:08.982067    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.007258629s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context flannel-131629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context flannel-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context flannel-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=hyperkit 
E0108 13:34:19.222828    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:34:39.457312    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:34:39.703895    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=hyperkit : (54.384604327s)
--- PASS: TestNetworkPlugins/group/bridge/Start (54.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-131629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-131629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-v8vkf" [fb0044e1-3b43-4739-ac1f-29c1beb7e96a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 13:34:48.676895    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-v8vkf" [fb0044e1-3b43-4739-ac1f-29c1beb7e96a] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00720363s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-131629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=hyperkit 

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-131629 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=hyperkit : (52.679297187s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (52.68s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-131629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-131629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-tnxjh" [7e36de89-4673-455b-8d57-0b6c8402df63] Pending
helpers_test.go:342: "netcat-5788d667bd-tnxjh" [7e36de89-4673-455b-8d57-0b6c8402df63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-tnxjh" [7e36de89-4673-455b-8d57-0b6c8402df63] Running
E0108 13:35:20.666140    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004366205s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-131629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-133527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0108 13:35:50.421528    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 13:35:52.018609    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:35:52.023839    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:35:52.034613    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:35:52.056668    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:35:52.096890    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:35:52.176993    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:35:52.338177    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:35:52.659789    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:35:53.302027    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-133527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m36.264662362s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (156.26s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-131629 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-131629 replace --force -f testdata/netcat-deployment.yaml
E0108 13:35:54.582244    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-fr5kj" [59402520-d567-49a4-aaeb-840dbf0b687d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 13:35:57.143554    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-fr5kj" [59402520-d567-49a4-aaeb-840dbf0b687d] Running
E0108 13:36:02.264952    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.004437618s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-131629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-131629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (66.15s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-133708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.25.3
E0108 13:37:08.890893    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/false-131629/client.crt: no such file or directory
E0108 13:37:13.949364    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:37:14.012654    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/false-131629/client.crt: no such file or directory
E0108 13:37:23.298545    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:37:24.253695    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/false-131629/client.crt: no such file or directory
E0108 13:37:32.779921    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:32.784962    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:32.795090    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:32.816495    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:32.856613    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:32.938201    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:33.098289    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:33.418980    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:34.060652    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:35.341250    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:37.903290    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:39.056422    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:37:43.024993    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:37:44.735314    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/false-131629/client.crt: no such file or directory
E0108 13:37:53.265985    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-133708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.25.3: (1m6.149430453s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.15s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-133527 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [cb098195-67ef-4414-9ae5-cd5b90b77f37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [cb098195-67ef-4414-9ae5-cd5b90b77f37] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.020781405s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-133527 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-133527 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-133527 describe deploy/metrics-server -n kube-system
E0108 13:38:13.746607    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/old-k8s-version/serial/Stop (1.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-133527 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-133527 --alsologtostderr -v=3: (1.235451276s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.24s)

TestStartStop/group/no-preload/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-133708 create -f testdata/busybox.yaml

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [d7757e37-390f-46f3-a3b1-b0864c44a38f] Pending

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:342: "busybox" [d7757e37-390f-46f3-a3b1-b0864c44a38f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [d7757e37-390f-46f3-a3b1-b0864c44a38f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.016080421s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-133708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-133527 -n old-k8s-version-133527

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-133527 -n old-k8s-version-133527: exit status 7 (74.134277ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-133527 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/old-k8s-version/serial/SecondStart (453.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-133527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-133527 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (7m33.388131541s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-133527 -n old-k8s-version-133527
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (453.55s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.66s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-133708 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0108 13:38:25.695708    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/false-131629/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-133708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.66s)

TestStartStop/group/no-preload/serial/Stop (8.24s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-133708 --alsologtostderr -v=3
E0108 13:38:27.513256    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:27.518547    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:27.530800    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:27.552271    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:27.593071    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:27.673599    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:27.835735    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:28.157019    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:28.799353    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:30.081024    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:32.641374    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-133708 --alsologtostderr -v=3: (8.243380718s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.24s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-133708 -n no-preload-133708
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-133708 -n no-preload-133708: exit status 7 (65.988365ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-133708 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/no-preload/serial/SecondStart (313.89s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-133708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.25.3
E0108 13:38:35.870099    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:38:37.762147    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:48.002526    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:38:52.331527    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:52.337855    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:52.349954    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:52.372203    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:52.412395    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:52.494630    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:52.656808    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:52.978930    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:53.619491    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:54.708542    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:38:54.902212    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:57.465449    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:38:58.746653    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:39:00.532556    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 13:39:02.589767    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:39:08.493759    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:39:12.835110    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:39:26.444975    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:39:33.319801    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:39:46.723512    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:46.729636    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:46.739864    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:46.762070    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:46.802814    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:46.883901    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:47.044760    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:47.365956    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:47.633442    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/false-131629/client.crt: no such file or directory
E0108 13:39:48.006115    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:48.691889    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 13:39:49.287668    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:49.461096    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:39:51.848928    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:39:56.969901    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:40:07.212355    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:40:10.496129    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:10.501433    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:10.513644    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:10.535821    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:10.576218    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:10.658043    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:10.818845    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:11.139900    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:11.782156    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:13.064392    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:14.282254    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:40:15.624531    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:16.644030    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:40:20.745688    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:27.693762    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:40:30.986148    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:50.439690    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 13:40:51.466506    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:40:52.034714    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:40:54.626119    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:40:54.632392    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:40:54.642517    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:40:54.662707    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:40:54.704562    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:40:54.784733    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:40:54.945127    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:40:55.266079    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:40:55.906348    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:40:57.186596    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:40:59.748154    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:41:04.868432    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:41:08.655343    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:41:11.383216    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:41:11.744509    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 13:41:15.109745    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:41:16.021362    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:41:19.728068    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:41:32.427546    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:41:35.590173    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:41:36.203640    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:41:55.624291    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:42:03.785978    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/false-131629/client.crt: no such file or directory
E0108 13:42:16.550764    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:42:30.578079    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:42:31.474989    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/false-131629/client.crt: no such file or directory
E0108 13:42:32.796908    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:42:54.348243    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:43:00.485431    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:43:27.529633    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:43:38.472858    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-133708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.25.3: (5m13.723434118s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-133708 -n no-preload-133708
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (313.89s)
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-ll4kq" [da68cb97-0684-447d-b859-ad3d15cf9705] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0108 13:43:52.348321    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-ll4kq" [da68cb97-0684-447d-b859-ad3d15cf9705] Running
E0108 13:43:55.224287    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:43:58.760158    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.008834333s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-ll4kq" [da68cb97-0684-447d-b859-ad3d15cf9705] Running
E0108 13:44:00.545552    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009623191s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-133708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-133708 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.18s)
TestStartStop/group/no-preload/serial/Pause (1.93s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-133708 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-133708 -n no-preload-133708
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-133708 -n no-preload-133708: exit status 2 (154.868456ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-133708 -n no-preload-133708
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-133708 -n no-preload-133708: exit status 2 (156.004513ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-133708 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-133708 -n no-preload-133708
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-133708 -n no-preload-133708
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.93s)
TestStartStop/group/embed-certs/serial/FirstStart (57.97s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-134412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.25.3
E0108 13:44:20.045327    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:44:46.724216    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:44:48.694102    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-134412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.25.3: (57.966133058s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.97s)
TestStartStop/group/embed-certs/serial/DeployApp (10.27s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-134412 create -f testdata/busybox.yaml
E0108 13:45:10.496273    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [05db2cf1-ddbf-4f2f-9120-24d541844647] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 13:45:14.419177    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
helpers_test.go:342: "busybox" [05db2cf1-ddbf-4f2f-9120-24d541844647] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.017557336s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-134412 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.71s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-134412 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-134412 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.71s)
TestStartStop/group/embed-certs/serial/Stop (8.25s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-134412 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-134412 --alsologtostderr -v=3: (8.251213998s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.25s)
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-134412 -n embed-certs-134412
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-134412 -n embed-certs-134412: exit status 7 (66.467366ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-134412 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)
TestStartStop/group/embed-certs/serial/SecondStart (324.38s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-134412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.25.3
E0108 13:45:38.190456    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-134412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.25.3: (5m24.2164456s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-134412 -n embed-certs-134412
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (324.38s)
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-2jpxc" [31d6ffe6-2dbc-4d4b-8e5a-0ddc8099e609] Running
E0108 13:45:50.439379    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 13:45:52.035822    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009427387s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-2jpxc" [31d6ffe6-2dbc-4d4b-8e5a-0ddc8099e609] Running
E0108 13:45:54.628961    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004417494s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-133527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-133527 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.18s)
TestStartStop/group/old-k8s-version/serial/Pause (1.84s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-133527 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-133527 -n old-k8s-version-133527
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-133527 -n old-k8s-version-133527: exit status 2 (158.832629ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-133527 -n old-k8s-version-133527
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-133527 -n old-k8s-version-133527: exit status 2 (160.120326ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-133527 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-133527 -n old-k8s-version-133527
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-133527 -n old-k8s-version-133527
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.84s)
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.93s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-134607 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.25.3
E0108 13:46:16.021483    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:46:22.330134    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:46:55.626117    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:47:03.788543    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/false-131629/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-134607 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.25.3: (1m4.934759093s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.93s)
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-134607 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2bb84abb-933e-420d-9ee9-bc4e9933d890] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 13:47:13.521197    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
helpers_test.go:342: "busybox" [2bb84abb-933e-420d-9ee9-bc4e9933d890] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.016922941s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-134607 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.63s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-134607 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-134607 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.63s)
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-134607 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-134607 --alsologtostderr -v=3: (8.243530173s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.24s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-134607 -n default-k8s-diff-port-134607
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-134607 -n default-k8s-diff-port-134607: exit status 7 (67.55019ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-134607 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (320.73s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-134607 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.25.3
E0108 13:47:32.799462    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
E0108 13:48:03.965303    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:03.971754    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:03.983304    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:04.004979    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:04.045892    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:04.126321    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:04.286972    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:04.607763    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:05.249608    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:06.531281    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:09.092720    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:14.213968    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:15.099274    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:15.104471    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:15.115482    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:15.135632    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:15.176740    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:15.256914    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:15.417107    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:15.738843    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:16.381067    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:17.662120    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:18.679305    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:48:20.223955    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:24.454205    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:25.345052    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:27.531692    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kindnet-131629/client.crt: no such file or directory
E0108 13:48:35.585989    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:44.934985    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:48:52.349871    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/flannel-131629/client.crt: no such file or directory
E0108 13:48:56.066247    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:48:58.763316    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:49:00.546193    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 13:49:25.897518    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:49:37.026753    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
E0108 13:49:46.727161    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/enable-default-cni-131629/client.crt: no such file or directory
E0108 13:49:48.696028    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/addons-122738/client.crt: no such file or directory
E0108 13:50:10.498408    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/bridge-131629/client.crt: no such file or directory
E0108 13:50:21.810510    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/cilium-131629/client.crt: no such file or directory
E0108 13:50:23.604957    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/ingress-addon-legacy-123659/client.crt: no such file or directory
E0108 13:50:47.818170    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
E0108 13:50:50.442648    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
E0108 13:50:52.038317    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
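The long run of `cert_rotation.go:168` errors above appears to be client-go's certificate reloader retrying kubeconfig entries whose client certificates were removed along with earlier, already-deleted test profiles; they are noisy but do not fail any test. As a quick triage step, the affected profiles can be tallied from a saved copy of this report (a sketch; `report.txt` is a placeholder filename, not a file the suite produces):

```shell
# Count cert_rotation "no such file" failures per profile.
# "report.txt" is a placeholder for a saved copy of this report text.
grep -o 'profiles/[^/]*/client\.crt' report.txt | sort | uniq -c | sort -rn
```

In this run, `no-preload-133708` accounts for most of the entries, consistent with that profile having been deleted while its kubeconfig context was still being watched.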

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-134607 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.25.3: (5m20.563921149s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-134607 -n default-k8s-diff-port-134607
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (320.73s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-nrz5r" [f4b7fa32-e4d7-49a2-ad10-12af318080ea] Running
E0108 13:50:54.629678    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/kubenet-131629/client.crt: no such file or directory
E0108 13:50:58.948327    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/no-preload-133708/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010914485s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-nrz5r" [f4b7fa32-e4d7-49a2-ad10-12af318080ea] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007282605s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-134412 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-134412 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/embed-certs/serial/Pause (1.87s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-134412 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-134412 -n embed-certs-134412
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-134412 -n embed-certs-134412: exit status 2 (157.991334ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-134412 -n embed-certs-134412
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-134412 -n embed-certs-134412: exit status 2 (154.937182ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-134412 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-134412 -n embed-certs-134412
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-134412 -n embed-certs-134412
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.87s)

TestStartStop/group/newest-cni/serial/FirstStart (53.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-135112 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=hyperkit  --kubernetes-version=v1.25.3
E0108 13:51:16.023445    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/skaffold-131513/client.crt: no such file or directory
E0108 13:51:55.629235    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/auto-131629/client.crt: no such file or directory
E0108 13:52:03.788878    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/false-131629/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-135112 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=hyperkit  --kubernetes-version=v1.25.3: (53.037590241s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.04s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-135112 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/newest-cni/serial/Stop (8.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-135112 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-135112 --alsologtostderr -v=3: (8.318866473s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-135112 -n newest-cni-135112
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-135112 -n newest-cni-135112: exit status 7 (69.37728ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-135112 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/newest-cni/serial/SecondStart (31.78s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-135112 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=hyperkit  --kubernetes-version=v1.25.3
E0108 13:52:15.091732    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/custom-flannel-131629/client.crt: no such file or directory
E0108 13:52:32.799980    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/calico-131629/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-135112 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=hyperkit  --kubernetes-version=v1.25.3: (31.622453079s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-135112 -n newest-cni-135112
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.78s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-135112 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/newest-cni/serial/Pause (1.8s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-135112 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-135112 -n newest-cni-135112
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-135112 -n newest-cni-135112: exit status 2 (154.922258ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-135112 -n newest-cni-135112
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-135112 -n newest-cni-135112: exit status 2 (154.675659ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-135112 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-135112 -n newest-cni-135112
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-135112 -n newest-cni-135112
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.80s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-nspp4" [64745124-26b2-405d-a706-24a643ea1a27] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010379389s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-nspp4" [64745124-26b2-405d-a706-24a643ea1a27] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005433853s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-134607 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-134607 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-134607 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134607 -n default-k8s-diff-port-134607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134607 -n default-k8s-diff-port-134607: exit status 2 (152.565302ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134607 -n default-k8s-diff-port-134607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134607 -n default-k8s-diff-port-134607: exit status 2 (151.862839ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-134607 --alsologtostderr -v=1
E0108 13:53:03.965967    4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/old-k8s-version-133527/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134607 -n default-k8s-diff-port-134607
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134607 -n default-k8s-diff-port-134607
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.83s)

Test skip (16/301)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:455: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:291: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.41s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-134606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-134606
--- SKIP: TestStartStop/group/disable-driver-mounts (0.41s)