Test Report: Hyperkit_macOS 15642

4cf467cecc4d49355139c24bc1420f3978a367dd:2023-01-14:27426

Failed tests (2/302)

Order  Failed test                                     Duration
246    TestPause/serial/SecondStartNoReconfiguration   64.51s
314    TestNetworkPlugins/group/kubenet/HairPin        52.87s
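A failing test from the table above can usually be re-run in isolation from a minikube source checkout using the standard `go test -run` subtest selector. The package path, timeout, and the expectation that the minikube binaries are already built in `out/` are assumptions about the local setup, not details taken from this report:

```shell
# Re-run a single failing integration test by name (assumes a minikube
# checkout with binaries already built under out/; adjust paths as needed).
go test ./test/integration \
  -run "TestPause/serial/SecondStartNoReconfiguration" \
  -timeout 30m -v
```

The quoted `-run` pattern matches the parent test and the specific subtest path, so sibling subtests in `TestPause/serial` are skipped.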
TestPause/serial/SecondStartNoReconfiguration (64.51s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-030526 --alsologtostderr -v=1 --driver=hyperkit 
E0114 03:06:43.518124    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 03:07:04.703933    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:04.709189    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:04.719626    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:04.739791    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:04.781007    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:04.861161    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:05.021767    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:05.342918    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:05.984512    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:07.265099    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:09.825620    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-030526 --alsologtostderr -v=1 --driver=hyperkit : (58.07772579s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-030526] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	* Using the hyperkit driver based on existing profile
	* Starting control plane node pause-030526 in cluster pause-030526
	* Updating the running hyperkit "pause-030526" VM ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Done! kubectl is now configured to use "pause-030526" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0114 03:06:28.687385    9157 out.go:296] Setting OutFile to fd 1 ...
	I0114 03:06:28.687591    9157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:06:28.687599    9157 out.go:309] Setting ErrFile to fd 2...
	I0114 03:06:28.687603    9157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:06:28.687745    9157 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
	I0114 03:06:28.688255    9157 out.go:303] Setting JSON to false
	I0114 03:06:28.710759    9157 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3961,"bootTime":1673690427,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0114 03:06:28.710861    9157 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 03:06:28.734596    9157 out.go:177] * [pause-030526] minikube v1.28.0 on Darwin 13.0.1
	I0114 03:06:28.776842    9157 notify.go:220] Checking for updates...
	I0114 03:06:28.798601    9157 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 03:06:28.840798    9157 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	I0114 03:06:28.861619    9157 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 03:06:28.882772    9157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 03:06:28.924484    9157 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	I0114 03:06:28.946117    9157 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:06:28.946536    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:06:28.946565    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:06:28.954173    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52749
	I0114 03:06:28.954556    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:06:28.955008    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:06:28.955021    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:06:28.955307    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:06:28.955441    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:06:28.955618    9157 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 03:06:28.955912    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:06:28.955936    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:06:28.963290    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52751
	I0114 03:06:28.963705    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:06:28.964099    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:06:28.964121    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:06:28.964335    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:06:28.964443    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:06:28.992792    9157 out.go:177] * Using the hyperkit driver based on existing profile
	I0114 03:06:29.034697    9157 start.go:294] selected driver: hyperkit
	I0114 03:06:29.034712    9157 start.go:838] validating driver "hyperkit" against &{Name:pause-030526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.25.3 ClusterName:pause-030526 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:06:29.034854    9157 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 03:06:29.034912    9157 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:06:29.035024    9157 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15642-1627/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0114 03:06:29.042466    9157 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0114 03:06:29.046033    9157 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:06:29.046055    9157 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0114 03:06:29.049158    9157 cni.go:95] Creating CNI manager for ""
	I0114 03:06:29.049177    9157 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:06:29.049194    9157 start_flags.go:319] config:
	{Name:pause-030526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-030526 Namespace:default APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:06:29.049375    9157 iso.go:125] acquiring lock: {Name:mkf812bef4e208b28a360507a7c86d17e208f6c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:06:29.091586    9157 out.go:177] * Starting control plane node pause-030526 in cluster pause-030526
	I0114 03:06:29.112827    9157 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:06:29.112917    9157 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 03:06:29.112936    9157 cache.go:57] Caching tarball of preloaded images
	I0114 03:06:29.113063    9157 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 03:06:29.113081    9157 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 03:06:29.113170    9157 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/config.json ...
	I0114 03:06:29.113636    9157 cache.go:193] Successfully downloaded all kic artifacts
	I0114 03:06:29.113667    9157 start.go:364] acquiring machines lock for pause-030526: {Name:mkd798b4eb4b12534fdc8a3119639005936a788a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0114 03:06:29.113733    9157 start.go:368] acquired machines lock for "pause-030526" in 45.637µs
	I0114 03:06:29.113755    9157 start.go:96] Skipping create...Using existing machine configuration
	I0114 03:06:29.113766    9157 fix.go:55] fixHost starting: 
	I0114 03:06:29.114009    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:06:29.114025    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:06:29.121486    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52753
	I0114 03:06:29.121864    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:06:29.122354    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:06:29.122369    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:06:29.122611    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:06:29.122713    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:06:29.122814    9157 main.go:134] libmachine: (pause-030526) Calling .GetState
	I0114 03:06:29.122941    9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:06:29.123087    9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
	I0114 03:06:29.124181    9157 fix.go:103] recreateIfNeeded on pause-030526: state=Running err=<nil>
	W0114 03:06:29.124197    9157 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 03:06:29.166509    9157 out.go:177] * Updating the running hyperkit "pause-030526" VM ...
	I0114 03:06:29.187761    9157 machine.go:88] provisioning docker machine ...
	I0114 03:06:29.187784    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:06:29.187932    9157 main.go:134] libmachine: (pause-030526) Calling .GetMachineName
	I0114 03:06:29.188023    9157 buildroot.go:166] provisioning hostname "pause-030526"
	I0114 03:06:29.188033    9157 main.go:134] libmachine: (pause-030526) Calling .GetMachineName
	I0114 03:06:29.188122    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.188210    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:06:29.188309    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.188405    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.188490    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:06:29.188626    9157 main.go:134] libmachine: Using SSH client type: native
	I0114 03:06:29.188805    9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.24 22 <nil> <nil>}
	I0114 03:06:29.188818    9157 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-030526 && echo "pause-030526" | sudo tee /etc/hostname
	I0114 03:06:29.273398    9157 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-030526
	
	I0114 03:06:29.273418    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.273565    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:06:29.273662    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.273742    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.273835    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:06:29.273992    9157 main.go:134] libmachine: Using SSH client type: native
	I0114 03:06:29.274116    9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.24 22 <nil> <nil>}
	I0114 03:06:29.274129    9157 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-030526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-030526/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-030526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 03:06:29.348965    9157 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 03:06:29.348986    9157 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1627/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1627/.minikube}
	I0114 03:06:29.349020    9157 buildroot.go:174] setting up certificates
	I0114 03:06:29.349033    9157 provision.go:83] configureAuth start
	I0114 03:06:29.349046    9157 main.go:134] libmachine: (pause-030526) Calling .GetMachineName
	I0114 03:06:29.349179    9157 main.go:134] libmachine: (pause-030526) Calling .GetIP
	I0114 03:06:29.349277    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.349368    9157 provision.go:138] copyHostCerts
	I0114 03:06:29.349460    9157 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1627/.minikube/key.pem, removing ...
	I0114 03:06:29.349470    9157 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1627/.minikube/key.pem
	I0114 03:06:29.349604    9157 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1627/.minikube/key.pem (1679 bytes)
	I0114 03:06:29.349818    9157 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.pem, removing ...
	I0114 03:06:29.349825    9157 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.pem
	I0114 03:06:29.349899    9157 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.pem (1082 bytes)
	I0114 03:06:29.350084    9157 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1627/.minikube/cert.pem, removing ...
	I0114 03:06:29.350091    9157 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1627/.minikube/cert.pem
	I0114 03:06:29.350154    9157 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1627/.minikube/cert.pem (1123 bytes)
	I0114 03:06:29.350278    9157 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca-key.pem org=jenkins.pause-030526 san=[192.168.64.24 192.168.64.24 localhost 127.0.0.1 minikube pause-030526]
	I0114 03:06:29.470936    9157 provision.go:172] copyRemoteCerts
	I0114 03:06:29.471020    9157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 03:06:29.471058    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.471229    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:06:29.471341    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.471418    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:06:29.471504    9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
	I0114 03:06:29.522818    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 03:06:29.539830    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0114 03:06:29.557141    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 03:06:29.574682    9157 provision.go:86] duration metric: configureAuth took 225.633354ms
	I0114 03:06:29.574695    9157 buildroot.go:189] setting minikube options for container-runtime
	I0114 03:06:29.574864    9157 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:06:29.574903    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:06:29.575088    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.575199    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:06:29.575309    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.575407    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.575504    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:06:29.575647    9157 main.go:134] libmachine: Using SSH client type: native
	I0114 03:06:29.575756    9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.24 22 <nil> <nil>}
	I0114 03:06:29.575765    9157 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 03:06:29.651864    9157 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0114 03:06:29.651886    9157 buildroot.go:70] root file system type: tmpfs
	I0114 03:06:29.652052    9157 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 03:06:29.652086    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.652231    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:06:29.652330    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.652418    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.652523    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:06:29.652668    9157 main.go:134] libmachine: Using SSH client type: native
	I0114 03:06:29.652795    9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.24 22 <nil> <nil>}
	I0114 03:06:29.652844    9157 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 03:06:29.737555    9157 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 03:06:29.737583    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.737711    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:06:29.737816    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.737910    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.737994    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:06:29.738154    9157 main.go:134] libmachine: Using SSH client type: native
	I0114 03:06:29.738285    9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.24 22 <nil> <nil>}
	I0114 03:06:29.738299    9157 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 03:06:29.818542    9157 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 03:06:29.818555    9157 machine.go:91] provisioned docker machine in 630.785986ms
	I0114 03:06:29.818564    9157 start.go:300] post-start starting for "pause-030526" (driver="hyperkit")
	I0114 03:06:29.818569    9157 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 03:06:29.818585    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:06:29.818762    9157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 03:06:29.818776    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.818863    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:06:29.818989    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.819101    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:06:29.819212    9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
	I0114 03:06:29.863589    9157 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 03:06:29.867147    9157 info.go:137] Remote host: Buildroot 2021.02.12
	I0114 03:06:29.867177    9157 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1627/.minikube/addons for local assets ...
	I0114 03:06:29.867293    9157 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1627/.minikube/files for local assets ...
	I0114 03:06:29.867473    9157 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/ssl/certs/29172.pem -> 29172.pem in /etc/ssl/certs
	I0114 03:06:29.867660    9157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 03:06:29.875288    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/ssl/certs/29172.pem --> /etc/ssl/certs/29172.pem (1708 bytes)
	I0114 03:06:29.898311    9157 start.go:303] post-start completed in 79.737622ms
	I0114 03:06:29.898335    9157 fix.go:57] fixHost completed within 784.573643ms
	I0114 03:06:29.898350    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.898547    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:06:29.898686    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.898829    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.899014    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:06:29.899206    9157 main.go:134] libmachine: Using SSH client type: native
	I0114 03:06:29.899394    9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 192.168.64.24 22 <nil> <nil>}
	I0114 03:06:29.899432    9157 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I0114 03:06:29.979697    9157 main.go:134] libmachine: SSH cmd err, output: <nil>: 1673694390.143622494
	
	I0114 03:06:29.979710    9157 fix.go:207] guest clock: 1673694390.143622494
	I0114 03:06:29.979738    9157 fix.go:220] Guest: 2023-01-14 03:06:30.143622494 -0800 PST Remote: 2023-01-14 03:06:29.898338 -0800 PST m=+1.283437140 (delta=245.284494ms)
	I0114 03:06:29.979808    9157 fix.go:191] guest clock delta is within tolerance: 245.284494ms
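	The guest-clock check above runs `date +%s.%N` inside the VM and accepts the result when the host/guest delta falls under a tolerance (here the 245ms delta passes). A hedged sketch of that comparison; `driftWithinTolerance` is an illustrative name, and the 2s tolerance is an assumption, since the log does not print the configured value:

	```go
	package main

	import (
		"fmt"
		"time"
	)

	// driftWithinTolerance reports whether guest and host clocks differ
	// by less than tol, in either direction.
	func driftWithinTolerance(guest, host time.Time, tol time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta < tol
	}

	func main() {
		// Values taken from the log entries above.
		host := time.Date(2023, 1, 14, 3, 6, 29, 898338000, time.UTC)
		guest := host.Add(245 * time.Millisecond) // delta=245.284494ms, rounded
		fmt.Println(driftWithinTolerance(guest, host, 2*time.Second))
	}
	```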
	I0114 03:06:29.979818    9157 start.go:83] releasing machines lock for "pause-030526", held for 866.080602ms
	I0114 03:06:29.979845    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:06:29.980003    9157 main.go:134] libmachine: (pause-030526) Calling .GetIP
	I0114 03:06:29.980117    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:06:29.980494    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:06:29.980640    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:06:29.980740    9157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 03:06:29.980793    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.980832    9157 ssh_runner.go:195] Run: cat /version.json
	I0114 03:06:29.980844    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:06:29.980928    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:06:29.981016    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:06:29.981118    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.981152    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:06:29.981256    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:06:29.981307    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:06:29.981396    9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
	I0114 03:06:29.981473    9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
	I0114 03:06:30.027399    9157 ssh_runner.go:195] Run: systemctl --version
	I0114 03:06:30.066000    9157 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:06:30.066156    9157 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:06:30.089163    9157 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 03:06:30.089186    9157 docker.go:543] Images already preloaded, skipping extraction
	I0114 03:06:30.089274    9157 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 03:06:30.100249    9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0114 03:06:30.112389    9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 03:06:30.124623    9157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 03:06:30.139844    9157 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 03:06:30.284861    9157 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 03:06:30.434310    9157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:06:30.566116    9157 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 03:06:47.671669    9157 ssh_runner.go:235] Completed: sudo systemctl restart docker: (17.105631991s)
	I0114 03:06:47.671737    9157 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 03:06:47.770672    9157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:06:47.869421    9157 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 03:06:47.878146    9157 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 03:06:47.878278    9157 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 03:06:47.881634    9157 start.go:472] Will wait 60s for crictl version
	I0114 03:06:47.882340    9157 ssh_runner.go:195] Run: which crictl
	I0114 03:06:47.884593    9157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 03:06:47.908327    9157 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 03:06:47.908411    9157 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:06:47.928147    9157 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:06:47.969295    9157 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 03:06:47.969496    9157 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0114 03:06:47.973825    9157 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:06:47.973900    9157 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:06:47.989568    9157 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 03:06:47.989581    9157 docker.go:543] Images already preloaded, skipping extraction
	I0114 03:06:47.989674    9157 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:06:48.005387    9157 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 03:06:48.005408    9157 cache_images.go:84] Images are preloaded, skipping loading
	I0114 03:06:48.005489    9157 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 03:06:48.027097    9157 cni.go:95] Creating CNI manager for ""
	I0114 03:06:48.027111    9157 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:06:48.027129    9157 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 03:06:48.027144    9157 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.24 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-030526 NodeName:pause-030526 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.24 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 03:06:48.027235    9157 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-030526"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 03:06:48.027301    9157 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-030526 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.24 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:pause-030526 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 03:06:48.027366    9157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 03:06:48.033500    9157 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 03:06:48.033554    9157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 03:06:48.039264    9157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I0114 03:06:48.050124    9157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 03:06:48.061033    9157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2037 bytes)
	I0114 03:06:48.071797    9157 ssh_runner.go:195] Run: grep 192.168.64.24	control-plane.minikube.internal$ /etc/hosts
	I0114 03:06:48.074157    9157 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526 for IP: 192.168.64.24
	I0114 03:06:48.074259    9157 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.key
	I0114 03:06:48.074312    9157 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1627/.minikube/proxy-client-ca.key
	I0114 03:06:48.074399    9157 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key
	I0114 03:06:48.074458    9157 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/apiserver.key.098da7d7
	I0114 03:06:48.074508    9157 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/proxy-client.key
	I0114 03:06:48.074730    9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/2917.pem (1338 bytes)
	W0114 03:06:48.074771    9157 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/2917_empty.pem, impossibly tiny 0 bytes
	I0114 03:06:48.074783    9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 03:06:48.074816    9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem (1082 bytes)
	I0114 03:06:48.074856    9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/cert.pem (1123 bytes)
	I0114 03:06:48.074893    9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/key.pem (1679 bytes)
	I0114 03:06:48.074969    9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/ssl/certs/29172.pem (1708 bytes)
	I0114 03:06:48.075495    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 03:06:48.091370    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0114 03:06:48.107324    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 03:06:48.125396    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 03:06:48.142367    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 03:06:48.158556    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0114 03:06:48.174530    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 03:06:48.190969    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 03:06:48.206877    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 03:06:48.222733    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/2917.pem --> /usr/share/ca-certificates/2917.pem (1338 bytes)
	I0114 03:06:48.238623    9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/ssl/certs/29172.pem --> /usr/share/ca-certificates/29172.pem (1708 bytes)
	I0114 03:06:48.254642    9157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 03:06:48.265754    9157 ssh_runner.go:195] Run: openssl version
	I0114 03:06:48.269279    9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 03:06:48.275922    9157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:06:48.278902    9157 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:06:48.278944    9157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:06:48.282442    9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 03:06:48.288235    9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2917.pem && ln -fs /usr/share/ca-certificates/2917.pem /etc/ssl/certs/2917.pem"
	I0114 03:06:48.294874    9157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2917.pem
	I0114 03:06:48.297795    9157 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:10 /usr/share/ca-certificates/2917.pem
	I0114 03:06:48.297840    9157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2917.pem
	I0114 03:06:48.301620    9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2917.pem /etc/ssl/certs/51391683.0"
	I0114 03:06:48.307666    9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29172.pem && ln -fs /usr/share/ca-certificates/29172.pem /etc/ssl/certs/29172.pem"
	I0114 03:06:48.314460    9157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29172.pem
	I0114 03:06:48.317366    9157 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:10 /usr/share/ca-certificates/29172.pem
	I0114 03:06:48.317407    9157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29172.pem
	I0114 03:06:48.321013    9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29172.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 03:06:48.326927    9157 kubeadm.go:396] StartCluster: {Name:pause-030526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-030526 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:06:48.327060    9157 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 03:06:48.342621    9157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 03:06:48.348703    9157 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 03:06:48.348718    9157 kubeadm.go:627] restartCluster start
	I0114 03:06:48.348768    9157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 03:06:48.354403    9157 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:48.354852    9157 kubeconfig.go:92] found "pause-030526" server: "https://192.168.64.24:8443"
	I0114 03:06:48.355521    9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 03:06:48.356029    9157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 03:06:48.361588    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:48.361636    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:48.368927    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:48.569934    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:48.570084    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:48.579739    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:48.769235    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:48.769401    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:48.779223    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:48.969141    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:48.969273    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:48.979305    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:49.170586    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:49.170746    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:49.180961    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:49.369472    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:49.369620    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:49.380092    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:49.569221    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:49.569376    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:49.580057    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:49.769559    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:49.769720    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:49.780160    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:49.969713    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:49.969880    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:49.980690    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:50.169158    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:50.169332    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:50.179890    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:50.368999    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:50.369102    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:50.377659    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:50.569384    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:50.569518    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:50.582430    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:50.770880    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:50.770950    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:50.788272    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:50.969676    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:50.969758    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:50.989047    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:51.169043    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:51.169127    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:51.194165    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:51.369417    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:51.369597    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:51.398452    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:51.398462    9157 api_server.go:165] Checking apiserver status ...
	I0114 03:06:51.398521    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:06:51.420959    9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:06:51.420992    9157 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0114 03:06:51.421002    9157 kubeadm.go:1114] stopping kube-system containers ...
	I0114 03:06:51.421125    9157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 03:06:51.477694    9157 docker.go:444] Stopping containers: [a91b8dbf52b2 4ef492042630 1f0472740d8e 8cfdb196b142 ec5b05843edc d1df9d20a995 be1781a847e8 a1988593cada 5d6ae273017b c7561d6051ce 9307465ae584 76689e83a514 848614b4aa6d 8a4cb12efc1e 3e92db3e0bfe 3273458c29fa acf450dad9b0 50be22d755aa 96711f56f8f4 e97d9bd01218 fcb88d3eda1c 92a6ee018993 6e5477b52047 918f5acfd267 e3abad0f6e65 419abd92be6a 36bcd90d5bf6 45b468820501 64c356ff458b 61d5e518bf2c 1fa84458fdde 47fa423a8588]
	I0114 03:06:51.477845    9157 ssh_runner.go:195] Run: docker stop a91b8dbf52b2 4ef492042630 1f0472740d8e 8cfdb196b142 ec5b05843edc d1df9d20a995 be1781a847e8 a1988593cada 5d6ae273017b c7561d6051ce 9307465ae584 76689e83a514 848614b4aa6d 8a4cb12efc1e 3e92db3e0bfe 3273458c29fa acf450dad9b0 50be22d755aa 96711f56f8f4 e97d9bd01218 fcb88d3eda1c 92a6ee018993 6e5477b52047 918f5acfd267 e3abad0f6e65 419abd92be6a 36bcd90d5bf6 45b468820501 64c356ff458b 61d5e518bf2c 1fa84458fdde 47fa423a8588
	I0114 03:07:02.004256    9157 ssh_runner.go:235] Completed: docker stop a91b8dbf52b2 4ef492042630 1f0472740d8e 8cfdb196b142 ec5b05843edc d1df9d20a995 be1781a847e8 a1988593cada 5d6ae273017b c7561d6051ce 9307465ae584 76689e83a514 848614b4aa6d 8a4cb12efc1e 3e92db3e0bfe 3273458c29fa acf450dad9b0 50be22d755aa 96711f56f8f4 e97d9bd01218 fcb88d3eda1c 92a6ee018993 6e5477b52047 918f5acfd267 e3abad0f6e65 419abd92be6a 36bcd90d5bf6 45b468820501 64c356ff458b 61d5e518bf2c 1fa84458fdde 47fa423a8588: (10.526450982s)
	I0114 03:07:02.004319    9157 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 03:07:02.058192    9157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 03:07:02.066713    9157 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 14 11:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jan 14 11:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jan 14 11:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jan 14 11:05 /etc/kubernetes/scheduler.conf
	
	I0114 03:07:02.066791    9157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 03:07:02.072780    9157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 03:07:02.085110    9157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 03:07:02.093260    9157 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:07:02.093319    9157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 03:07:02.103465    9157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 03:07:02.111399    9157 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:07:02.111459    9157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0114 03:07:02.118872    9157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 03:07:02.124956    9157 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 03:07:02.124967    9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:07:02.170978    9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:07:03.258681    9157 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.087691389s)
	I0114 03:07:03.258712    9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:07:03.416131    9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:07:03.469041    9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:07:03.530234    9157 api_server.go:51] waiting for apiserver process to appear ...
	I0114 03:07:03.530304    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:07:04.044210    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:07:04.544110    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:07:05.042833    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:07:05.055120    9157 api_server.go:71] duration metric: took 1.524893992s to wait for apiserver process to appear ...
	I0114 03:07:05.055140    9157 api_server.go:87] waiting for apiserver healthz status ...
	I0114 03:07:05.055159    9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
	I0114 03:07:07.680217    9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0114 03:07:07.680234    9157 api_server.go:102] status: https://192.168.64.24:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 03:07:08.181057    9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
	I0114 03:07:08.185347    9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 03:07:08.185807    9157 api_server.go:102] status: https://192.168.64.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 03:07:08.680885    9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
	I0114 03:07:08.687817    9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 03:07:08.710794    9157 api_server.go:102] status: https://192.168.64.24:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 03:07:09.180636    9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
	I0114 03:07:09.185298    9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 200:
	ok
	I0114 03:07:09.191228    9157 api_server.go:140] control plane version: v1.25.3
	I0114 03:07:09.191238    9157 api_server.go:130] duration metric: took 4.136116498s to wait for apiserver health ...
	I0114 03:07:09.191247    9157 cni.go:95] Creating CNI manager for ""
	I0114 03:07:09.191255    9157 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:07:09.191280    9157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 03:07:09.196622    9157 system_pods.go:59] 6 kube-system pods found
	I0114 03:07:09.196635    9157 system_pods.go:61] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
	I0114 03:07:09.196641    9157 system_pods.go:61] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 03:07:09.196646    9157 system_pods.go:61] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0114 03:07:09.196652    9157 system_pods.go:61] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 03:07:09.196657    9157 system_pods.go:61] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0114 03:07:09.196662    9157 system_pods.go:61] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0114 03:07:09.196666    9157 system_pods.go:74] duration metric: took 5.37845ms to wait for pod list to return data ...
	I0114 03:07:09.196671    9157 node_conditions.go:102] verifying NodePressure condition ...
	I0114 03:07:09.198952    9157 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0114 03:07:09.198970    9157 node_conditions.go:123] node cpu capacity is 2
	I0114 03:07:09.198980    9157 node_conditions.go:105] duration metric: took 2.305086ms to run NodePressure ...
	I0114 03:07:09.198991    9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:07:09.323365    9157 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0114 03:07:09.326471    9157 kubeadm.go:778] kubelet initialised
	I0114 03:07:09.326482    9157 kubeadm.go:779] duration metric: took 3.103727ms waiting for restarted kubelet to initialise ...
	I0114 03:07:09.326491    9157 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:07:09.330104    9157 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:09.333713    9157 pod_ready.go:92] pod "coredns-565d847f94-wk8g2" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:09.333722    9157 pod_ready.go:81] duration metric: took 3.606539ms waiting for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:09.333728    9157 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:11.340950    9157 pod_ready.go:102] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"False"
	I0114 03:07:13.841765    9157 pod_ready.go:102] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"False"
	I0114 03:07:15.843717    9157 pod_ready.go:102] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"False"
	I0114 03:07:18.344136    9157 pod_ready.go:102] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"False"
	I0114 03:07:19.342684    9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:19.342697    9157 pod_ready.go:81] duration metric: took 10.009021387s waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:19.342705    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:21.351765    9157 pod_ready.go:102] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"False"
	I0114 03:07:23.350513    9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.350547    9157 pod_ready.go:81] duration metric: took 4.007860495s waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.350554    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.353476    9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.353485    9157 pod_ready.go:81] duration metric: took 2.925304ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.353490    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.356134    9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.356142    9157 pod_ready.go:81] duration metric: took 2.647244ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.356148    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.358793    9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.358800    9157 pod_ready.go:81] duration metric: took 2.641458ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.358804    9157 pod_ready.go:38] duration metric: took 14.032386778s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:07:23.358813    9157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 03:07:23.366176    9157 ops.go:34] apiserver oom_adj: -16
	I0114 03:07:23.366186    9157 kubeadm.go:631] restartCluster took 35.017662843s
	I0114 03:07:23.366207    9157 kubeadm.go:398] StartCluster complete in 35.039471935s
	I0114 03:07:23.366217    9157 settings.go:142] acquiring lock: {Name:mk0c64d56bf3ff3479e8fa9f559b4f9cf25d55df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:07:23.366305    9157 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15642-1627/kubeconfig
	I0114 03:07:23.366836    9157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1627/kubeconfig: {Name:mk9e4b5f5c881bca46b5d9046e1e4e38df78e527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:07:23.367658    9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 03:07:23.369507    9157 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-030526" rescaled to 1
	I0114 03:07:23.369535    9157 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 03:07:23.369542    9157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 03:07:23.369576    9157 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0114 03:07:23.369692    9157 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:07:23.390480    9157 out.go:177] * Verifying Kubernetes components...
	I0114 03:07:23.390629    9157 addons.go:65] Setting storage-provisioner=true in profile "pause-030526"
	I0114 03:07:23.433350    9157 addons.go:227] Setting addon storage-provisioner=true in "pause-030526"
	I0114 03:07:23.390632    9157 addons.go:65] Setting default-storageclass=true in profile "pause-030526"
	W0114 03:07:23.433358    9157 addons.go:236] addon storage-provisioner should already be in state true
	I0114 03:07:23.433392    9157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-030526"
	I0114 03:07:23.433406    9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:07:23.430373    9157 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0114 03:07:23.433421    9157 host.go:66] Checking if "pause-030526" exists ...
	I0114 03:07:23.433815    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.433877    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.433873    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.433900    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.442841    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52806
	I0114 03:07:23.443203    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52808
	I0114 03:07:23.443537    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.443728    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.443899    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.443908    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.444057    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.444066    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.444119    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.444329    9157 node_ready.go:35] waiting up to 6m0s for node "pause-030526" to be "Ready" ...
	I0114 03:07:23.444380    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.444587    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.444602    9157 main.go:134] libmachine: (pause-030526) Calling .GetState
	I0114 03:07:23.444609    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.444705    9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.445301    9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
	I0114 03:07:23.447147    9157 node_ready.go:49] node "pause-030526" has status "Ready":"True"
	I0114 03:07:23.447164    9157 node_ready.go:38] duration metric: took 2.815218ms waiting for node "pause-030526" to be "Ready" ...
	I0114 03:07:23.447169    9157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:07:23.447225    9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 03:07:23.450515    9157 addons.go:227] Setting addon default-storageclass=true in "pause-030526"
	W0114 03:07:23.450531    9157 addons.go:236] addon default-storageclass should already be in state true
	I0114 03:07:23.450551    9157 host.go:66] Checking if "pause-030526" exists ...
	I0114 03:07:23.450887    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.450912    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.453524    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52810
	I0114 03:07:23.454275    9157 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.454289    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.454742    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.454758    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.455002    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.455108    9157 main.go:134] libmachine: (pause-030526) Calling .GetState
	I0114 03:07:23.455188    9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.455261    9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
	I0114 03:07:23.456200    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:07:23.459195    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52812
	I0114 03:07:23.477120    9157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 03:07:23.477524    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.498347    9157 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 03:07:23.498358    9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 03:07:23.498372    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:07:23.498499    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:07:23.498595    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:07:23.498695    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.498707    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.498780    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:07:23.498953    9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
	I0114 03:07:23.499031    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.499602    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.499665    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.508249    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52815
	I0114 03:07:23.508606    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.509066    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.509081    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.509378    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.509472    9157 main.go:134] libmachine: (pause-030526) Calling .GetState
	I0114 03:07:23.509563    9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.509636    9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
	I0114 03:07:23.510952    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:07:23.511144    9157 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 03:07:23.511152    9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 03:07:23.511161    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:07:23.511250    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:07:23.511331    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:07:23.511433    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:07:23.511524    9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
	I0114 03:07:23.553319    9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 03:07:23.563588    9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 03:07:23.749533    9157 pod_ready.go:92] pod "coredns-565d847f94-wk8g2" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.749544    9157 pod_ready.go:81] duration metric: took 295.256786ms waiting for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.749553    9157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.149706    9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:24.149731    9157 pod_ready.go:81] duration metric: took 400.160741ms waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.149737    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.158190    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158207    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158210    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158221    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158392    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158444    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158456    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158458    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158461    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158483    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158469    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158502    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158508    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158527    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158704    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158710    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158718    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158730    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158738    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158735    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158751    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158759    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158908    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.159011    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.159025    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.179920    9157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 03:07:24.200426    9157 addons.go:488] enableAddons completed in 830.850832ms
	I0114 03:07:24.550392    9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:24.550424    9157 pod_ready.go:81] duration metric: took 400.664842ms waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.550431    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.949214    9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:24.949226    9157 pod_ready.go:81] duration metric: took 398.790966ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.949237    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.350138    9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:25.350151    9157 pod_ready.go:81] duration metric: took 400.910872ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.350162    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.749166    9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:25.749177    9157 pod_ready.go:81] duration metric: took 399.012421ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.749184    9157 pod_ready.go:38] duration metric: took 2.302012184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:07:25.749196    9157 api_server.go:51] waiting for apiserver process to appear ...
	I0114 03:07:25.749260    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:07:25.765950    9157 api_server.go:71] duration metric: took 2.396412835s to wait for apiserver process to appear ...
	I0114 03:07:25.765970    9157 api_server.go:87] waiting for apiserver healthz status ...
	I0114 03:07:25.765977    9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
	I0114 03:07:25.772427    9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 200:
	ok
	I0114 03:07:25.772956    9157 api_server.go:140] control plane version: v1.25.3
	I0114 03:07:25.772967    9157 api_server.go:130] duration metric: took 6.991805ms to wait for apiserver health ...
	I0114 03:07:25.772974    9157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 03:07:25.950643    9157 system_pods.go:59] 7 kube-system pods found
	I0114 03:07:25.950657    9157 system_pods.go:61] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
	I0114 03:07:25.950661    9157 system_pods.go:61] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
	I0114 03:07:25.950665    9157 system_pods.go:61] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
	I0114 03:07:25.950678    9157 system_pods.go:61] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
	I0114 03:07:25.950683    9157 system_pods.go:61] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
	I0114 03:07:25.950690    9157 system_pods.go:61] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
	I0114 03:07:25.950696    9157 system_pods.go:61] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
	I0114 03:07:25.950700    9157 system_pods.go:74] duration metric: took 177.722556ms to wait for pod list to return data ...
	I0114 03:07:25.950706    9157 default_sa.go:34] waiting for default service account to be created ...
	I0114 03:07:26.149504    9157 default_sa.go:45] found service account: "default"
	I0114 03:07:26.149520    9157 default_sa.go:55] duration metric: took 198.806394ms for default service account to be created ...
	I0114 03:07:26.149525    9157 system_pods.go:116] waiting for k8s-apps to be running ...
	I0114 03:07:26.350967    9157 system_pods.go:86] 7 kube-system pods found
	I0114 03:07:26.350980    9157 system_pods.go:89] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
	I0114 03:07:26.350985    9157 system_pods.go:89] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
	I0114 03:07:26.350988    9157 system_pods.go:89] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
	I0114 03:07:26.350992    9157 system_pods.go:89] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
	I0114 03:07:26.350999    9157 system_pods.go:89] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
	I0114 03:07:26.351005    9157 system_pods.go:89] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
	I0114 03:07:26.351011    9157 system_pods.go:89] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
	I0114 03:07:26.351017    9157 system_pods.go:126] duration metric: took 201.48912ms to wait for k8s-apps to be running ...
	I0114 03:07:26.351034    9157 system_svc.go:44] waiting for kubelet service to be running ....
	I0114 03:07:26.351110    9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:07:26.360848    9157 system_svc.go:56] duration metric: took 9.811651ms WaitForService to wait for kubelet.
	I0114 03:07:26.360864    9157 kubeadm.go:573] duration metric: took 2.991330205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0114 03:07:26.360876    9157 node_conditions.go:102] verifying NodePressure condition ...
	I0114 03:07:26.549739    9157 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0114 03:07:26.549755    9157 node_conditions.go:123] node cpu capacity is 2
	I0114 03:07:26.549762    9157 node_conditions.go:105] duration metric: took 188.883983ms to run NodePressure ...
	I0114 03:07:26.549769    9157 start.go:217] waiting for startup goroutines ...
	I0114 03:07:26.550105    9157 ssh_runner.go:195] Run: rm -f paused
	I0114 03:07:26.590700    9157 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0114 03:07:26.635229    9157 out.go:177] * Done! kubectl is now configured to use "pause-030526" cluster and "default" namespace by default

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-030526 -n pause-030526
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-030526 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-030526 logs -n 25: (2.896771812s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
	|  Command   |              Args              |          Profile          |   User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
	| stop       | -p scheduled-stop-025204       | scheduled-stop-025204     | jenkins  | v1.28.0 | 14 Jan 23 02:53 PST |                     |
	|            | --schedule 15s                 |                           |          |         |                     |                     |
	| stop       | -p scheduled-stop-025204       | scheduled-stop-025204     | jenkins  | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	|            | --schedule 15s                 |                           |          |         |                     |                     |
	| delete     | -p scheduled-stop-025204       | scheduled-stop-025204     | jenkins  | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	| start      | -p skaffold-025353             | skaffold-025353           | jenkins  | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:54 PST |
	|            | --memory=2600                  |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| docker-env | --shell none -p                | skaffold-025353           | skaffold | v1.28.0 | 14 Jan 23 02:54 PST | 14 Jan 23 02:54 PST |
	|            | skaffold-025353                |                           |          |         |                     |                     |
	|            | --user=skaffold                |                           |          |         |                     |                     |
	| delete     | -p skaffold-025353             | skaffold-025353           | jenkins  | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 02:55 PST |
	| start      | -p offline-docker-025507       | offline-docker-025507     | jenkins  | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 03:02 PST |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --memory=2048 --wait=true      |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p auto-025507 --memory=2048   | auto-025507               | jenkins  | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 03:02 PST |
	|            | --alsologtostderr              |                           |          |         |                     |                     |
	|            | --wait=true --wait-timeout=5m  |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| ssh        | -p auto-025507 pgrep -a        | auto-025507               | jenkins  | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
	|            | kubelet                        |                           |          |         |                     |                     |
	| delete     | -p offline-docker-025507       | offline-docker-025507     | jenkins  | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
	| start      | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:03 PST |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.16.0   |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| delete     | -p auto-025507                 | auto-025507               | jenkins  | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
	| stop       | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:03 PST | 14 Jan 23 03:03 PST |
	| start      | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:03 PST | 14 Jan 23 03:04 PST |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.25.3   |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:04 PST |                     |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.16.0   |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:04 PST |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.25.3   |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p stopped-upgrade-030226      | stopped-upgrade-030226    | jenkins  | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:05 PST |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| delete     | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:04 PST |
	| delete     | -p stopped-upgrade-030226      | stopped-upgrade-030226    | jenkins  | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:05 PST |
	| start      | -p pause-030526 --memory=2048  | pause-030526              | jenkins  | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:06 PST |
	|            | --install-addons=false         |                           |          |         |                     |                     |
	|            | --wait=all --driver=hyperkit   |                           |          |         |                     |                     |
	| start      | -p running-upgrade-030435      | running-upgrade-030435    | jenkins  | v1.28.0 | 14 Jan 23 03:06 PST | 14 Jan 23 03:07 PST |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p pause-030526                | pause-030526              | jenkins  | v1.28.0 | 14 Jan 23 03:06 PST | 14 Jan 23 03:07 PST |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| delete     | -p running-upgrade-030435      | running-upgrade-030435    | jenkins  | v1.28.0 | 14 Jan 23 03:07 PST | 14 Jan 23 03:07 PST |
	| start      | -p NoKubernetes-030718         | NoKubernetes-030718       | jenkins  | v1.28.0 | 14 Jan 23 03:07 PST |                     |
	|            | --no-kubernetes                |                           |          |         |                     |                     |
	|            | --kubernetes-version=1.20      |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p NoKubernetes-030718         | NoKubernetes-030718       | jenkins  | v1.28.0 | 14 Jan 23 03:07 PST |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	|------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 03:07:19
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 03:07:19.019830    9247 out.go:296] Setting OutFile to fd 1 ...
	I0114 03:07:19.020100    9247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:07:19.020104    9247 out.go:309] Setting ErrFile to fd 2...
	I0114 03:07:19.020107    9247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:07:19.020220    9247 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
	I0114 03:07:19.020722    9247 out.go:303] Setting JSON to false
	I0114 03:07:19.039498    9247 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4012,"bootTime":1673690427,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0114 03:07:19.039605    9247 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 03:07:19.077571    9247 out.go:177] * [NoKubernetes-030718] minikube v1.28.0 on Darwin 13.0.1
	I0114 03:07:19.136784    9247 notify.go:220] Checking for updates...
	I0114 03:07:19.174094    9247 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 03:07:19.232648    9247 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	I0114 03:07:19.306906    9247 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 03:07:19.328006    9247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 03:07:19.348934    9247 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	I0114 03:07:19.370518    9247 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:07:19.370581    9247 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 03:07:19.398770    9247 out.go:177] * Using the hyperkit driver based on user configuration
	I0114 03:07:19.441028    9247 start.go:294] selected driver: hyperkit
	I0114 03:07:19.441047    9247 start.go:838] validating driver "hyperkit" against <nil>
	I0114 03:07:19.441077    9247 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 03:07:19.441203    9247 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:07:19.441421    9247 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15642-1627/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0114 03:07:19.449241    9247 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0114 03:07:19.452425    9247 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:19.452438    9247 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0114 03:07:19.452505    9247 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 03:07:19.454697    9247 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0114 03:07:19.454829    9247 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0114 03:07:19.454850    9247 cni.go:95] Creating CNI manager for ""
	I0114 03:07:19.454857    9247 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:07:19.454867    9247 start_flags.go:319] config:
	{Name:NoKubernetes-030718 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-030718 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:07:19.454983    9247 iso.go:125] acquiring lock: {Name:mkf812bef4e208b28a360507a7c86d17e208f6c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:07:19.497050    9247 out.go:177] * Starting control plane node NoKubernetes-030718 in cluster NoKubernetes-030718
	I0114 03:07:19.518880    9247 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:07:19.518967    9247 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 03:07:19.518991    9247 cache.go:57] Caching tarball of preloaded images
	I0114 03:07:19.519216    9247 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 03:07:19.519232    9247 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 03:07:19.519384    9247 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/NoKubernetes-030718/config.json ...
	I0114 03:07:19.519444    9247 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/NoKubernetes-030718/config.json: {Name:mk5caec35ff8fcf3d9c5465ac05bd2e53369341a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:07:19.519982    9247 cache.go:193] Successfully downloaded all kic artifacts
	I0114 03:07:19.520014    9247 start.go:364] acquiring machines lock for NoKubernetes-030718: {Name:mkd798b4eb4b12534fdc8a3119639005936a788a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0114 03:07:19.520101    9247 start.go:368] acquired machines lock for "NoKubernetes-030718" in 77µs
	I0114 03:07:19.520133    9247 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-030718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-030718 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 03:07:19.520191    9247 start.go:125] createHost starting for "" (driver="hyperkit")
	I0114 03:07:19.342684    9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:19.342697    9157 pod_ready.go:81] duration metric: took 10.009021387s waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:19.342705    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:21.351765    9157 pod_ready.go:102] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"False"
	I0114 03:07:23.350513    9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.350547    9157 pod_ready.go:81] duration metric: took 4.007860495s waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.350554    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.353476    9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.353485    9157 pod_ready.go:81] duration metric: took 2.925304ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.353490    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.356134    9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.356142    9157 pod_ready.go:81] duration metric: took 2.647244ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.356148    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.358793    9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.358800    9157 pod_ready.go:81] duration metric: took 2.641458ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.358804    9157 pod_ready.go:38] duration metric: took 14.032386778s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:07:23.358813    9157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 03:07:23.366176    9157 ops.go:34] apiserver oom_adj: -16
	I0114 03:07:23.366186    9157 kubeadm.go:631] restartCluster took 35.017662843s
	I0114 03:07:23.366207    9157 kubeadm.go:398] StartCluster complete in 35.039471935s
	I0114 03:07:23.366217    9157 settings.go:142] acquiring lock: {Name:mk0c64d56bf3ff3479e8fa9f559b4f9cf25d55df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:07:23.366305    9157 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15642-1627/kubeconfig
	I0114 03:07:23.366836    9157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1627/kubeconfig: {Name:mk9e4b5f5c881bca46b5d9046e1e4e38df78e527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:07:23.367658    9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 03:07:23.369507    9157 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-030526" rescaled to 1
	I0114 03:07:23.369535    9157 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 03:07:23.369542    9157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 03:07:23.369576    9157 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0114 03:07:23.369692    9157 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:07:23.390480    9157 out.go:177] * Verifying Kubernetes components...
	I0114 03:07:23.390629    9157 addons.go:65] Setting storage-provisioner=true in profile "pause-030526"
	I0114 03:07:23.433350    9157 addons.go:227] Setting addon storage-provisioner=true in "pause-030526"
	I0114 03:07:23.390632    9157 addons.go:65] Setting default-storageclass=true in profile "pause-030526"
	W0114 03:07:23.433358    9157 addons.go:236] addon storage-provisioner should already be in state true
	I0114 03:07:23.433392    9157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-030526"
	I0114 03:07:23.433406    9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:07:23.430373    9157 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0114 03:07:23.433421    9157 host.go:66] Checking if "pause-030526" exists ...
	I0114 03:07:23.433815    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.433877    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.433873    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.433900    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.442841    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52806
	I0114 03:07:23.443203    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52808
	I0114 03:07:23.443537    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.443728    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.443899    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.443908    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.444057    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.444066    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.444119    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.444329    9157 node_ready.go:35] waiting up to 6m0s for node "pause-030526" to be "Ready" ...
	I0114 03:07:23.444380    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.444587    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.444602    9157 main.go:134] libmachine: (pause-030526) Calling .GetState
	I0114 03:07:23.444609    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.444705    9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.445301    9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
	I0114 03:07:23.447147    9157 node_ready.go:49] node "pause-030526" has status "Ready":"True"
	I0114 03:07:23.447164    9157 node_ready.go:38] duration metric: took 2.815218ms waiting for node "pause-030526" to be "Ready" ...
	I0114 03:07:23.447169    9157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:07:23.447225    9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 03:07:23.450515    9157 addons.go:227] Setting addon default-storageclass=true in "pause-030526"
	W0114 03:07:23.450531    9157 addons.go:236] addon default-storageclass should already be in state true
	I0114 03:07:23.450551    9157 host.go:66] Checking if "pause-030526" exists ...
	I0114 03:07:23.450887    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.450912    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.453524    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52810
	I0114 03:07:23.454275    9157 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.454289    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.454742    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.454758    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.455002    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.455108    9157 main.go:134] libmachine: (pause-030526) Calling .GetState
	I0114 03:07:23.455188    9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.455261    9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
	I0114 03:07:23.456200    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:07:23.459195    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52812
	I0114 03:07:23.477120    9157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 03:07:23.477524    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.498347    9157 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 03:07:23.498358    9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 03:07:23.498372    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:07:23.498499    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:07:23.498595    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:07:23.498695    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.498707    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.498780    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:07:23.498953    9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
	I0114 03:07:23.499031    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.499602    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.499665    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.508249    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52815
	I0114 03:07:23.508606    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.509066    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.509081    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.509378    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.509472    9157 main.go:134] libmachine: (pause-030526) Calling .GetState
	I0114 03:07:23.509563    9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.509636    9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
	I0114 03:07:23.510952    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:07:23.511144    9157 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 03:07:23.511152    9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 03:07:23.511161    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:07:23.511250    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:07:23.511331    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:07:23.511433    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:07:23.511524    9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
	I0114 03:07:23.553319    9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 03:07:23.563588    9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 03:07:19.541712    9247 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0114 03:07:19.542144    9247 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:19.542218    9247 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:19.550680    9247 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52804
	I0114 03:07:19.551062    9247 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:19.551455    9247 main.go:134] libmachine: Using API Version  1
	I0114 03:07:19.551463    9247 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:19.551681    9247 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:19.551780    9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .GetMachineName
	I0114 03:07:19.551847    9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .DriverName
	I0114 03:07:19.551975    9247 start.go:159] libmachine.API.Create for "NoKubernetes-030718" (driver="hyperkit")
	I0114 03:07:19.552002    9247 client.go:168] LocalClient.Create starting
	I0114 03:07:19.552038    9247 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem
	I0114 03:07:19.552082    9247 main.go:134] libmachine: Decoding PEM data...
	I0114 03:07:19.552098    9247 main.go:134] libmachine: Parsing certificate...
	I0114 03:07:19.552156    9247 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/cert.pem
	I0114 03:07:19.552191    9247 main.go:134] libmachine: Decoding PEM data...
	I0114 03:07:19.552202    9247 main.go:134] libmachine: Parsing certificate...
	I0114 03:07:19.552213    9247 main.go:134] libmachine: Running pre-create checks...
	I0114 03:07:19.552220    9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .PreCreateCheck
	I0114 03:07:19.552322    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:19.552519    9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .GetConfigRaw
	I0114 03:07:19.552961    9247 main.go:134] libmachine: Creating machine...
	I0114 03:07:19.552966    9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .Create
	I0114 03:07:19.553050    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:19.553180    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.553037    9257 common.go:116] Making disk image using store path: /Users/jenkins/minikube-integration/15642-1627/.minikube
	I0114 03:07:19.553267    9247 main.go:134] libmachine: (NoKubernetes-030718) Downloading /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15642-1627/.minikube/cache/iso/amd64/minikube-v1.28.0-1668700269-15235-amd64.iso...
	I0114 03:07:19.718817    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.718695    9257 common.go:123] Creating ssh key: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/id_rsa...
	I0114 03:07:19.778524    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.778469    9257 common.go:129] Creating raw disk image: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk...
	I0114 03:07:19.778533    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Writing magic tar header
	I0114 03:07:19.778627    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Writing SSH key tar header
	I0114 03:07:19.779210    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.779155    9257 common.go:143] Fixing permissions on /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718 ...
	I0114 03:07:19.950626    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:19.950641    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid
	I0114 03:07:19.950650    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Using UUID 9cd1b71a-93fb-11ed-97d5-149d997cd0f1
	I0114 03:07:19.972204    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Generated MAC aa:b9:cb:46:9b:fa
	I0114 03:07:19.972218    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718
	I0114 03:07:19.972248    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9cd1b71a-93fb-11ed-97d5-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000182bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage", Initrd:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0114 03:07:19.972285    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9cd1b71a-93fb-11ed-97d5-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000182bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage", Initrd:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0114 03:07:19.972374    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9cd1b71a-93fb-11ed-97d5-149d997cd0f1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/tty,log=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718"}
	I0114 03:07:19.972409    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9cd1b71a-93fb-11ed-97d5-149d997cd0f1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/tty,log=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/console-ring -f kexec,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718"
	I0114 03:07:19.972414    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0114 03:07:19.973744    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Pid is 9258
	I0114 03:07:19.974155    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 0
	I0114 03:07:19.974164    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:19.974238    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
	I0114 03:07:19.975810    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
	I0114 03:07:19.976175    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0114 03:07:19.976189    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
	I0114 03:07:19.976220    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
	I0114 03:07:19.976231    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
	I0114 03:07:19.976241    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
	I0114 03:07:19.976250    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
	I0114 03:07:19.976259    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
	I0114 03:07:19.976268    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
	I0114 03:07:19.976274    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
	I0114 03:07:19.976280    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
	I0114 03:07:19.976296    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
	I0114 03:07:19.976308    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
	I0114 03:07:19.976317    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
	I0114 03:07:19.976325    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
	I0114 03:07:19.976336    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
	I0114 03:07:19.976344    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
	I0114 03:07:19.976352    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
	I0114 03:07:19.976359    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
	I0114 03:07:19.976380    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
	I0114 03:07:19.976389    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
	I0114 03:07:19.976395    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
	I0114 03:07:19.976400    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
	I0114 03:07:19.976423    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
	I0114 03:07:19.976436    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
	I0114 03:07:19.980329    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0114 03:07:19.989638    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0114 03:07:19.990267    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0114 03:07:19.990287    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0114 03:07:19.990297    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0114 03:07:19.990312    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0114 03:07:20.551946    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0114 03:07:20.551964    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0114 03:07:20.657062    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0114 03:07:20.657088    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0114 03:07:20.657094    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0114 03:07:20.657104    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0114 03:07:20.657933    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0114 03:07:20.657941    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0114 03:07:21.977330    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 1
	I0114 03:07:21.977341    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:21.977398    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
	I0114 03:07:21.978146    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
	I0114 03:07:21.978288    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0114 03:07:21.978295    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
	I0114 03:07:21.978302    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
	I0114 03:07:21.978329    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
	I0114 03:07:21.978337    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
	I0114 03:07:21.978342    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
	I0114 03:07:21.978348    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
	I0114 03:07:21.978368    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
	I0114 03:07:21.978374    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
	I0114 03:07:21.978398    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
	I0114 03:07:21.978413    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
	I0114 03:07:21.978423    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
	I0114 03:07:21.978433    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
	I0114 03:07:21.978438    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
	I0114 03:07:21.978445    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
	I0114 03:07:21.978490    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
	I0114 03:07:21.978515    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
	I0114 03:07:21.978525    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
	I0114 03:07:21.978530    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
	I0114 03:07:21.978536    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
	I0114 03:07:21.978543    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
	I0114 03:07:21.978549    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
	I0114 03:07:21.978555    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
	I0114 03:07:21.978562    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
	I0114 03:07:23.979244    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 2
	I0114 03:07:23.979260    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.979337    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
	I0114 03:07:23.980113    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
	I0114 03:07:23.980187    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0114 03:07:23.980199    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
	I0114 03:07:23.980208    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
	I0114 03:07:23.980214    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
	I0114 03:07:23.980233    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
	I0114 03:07:23.980241    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
	I0114 03:07:23.980250    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
	I0114 03:07:23.980259    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
	I0114 03:07:23.980265    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
	I0114 03:07:23.980277    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
	I0114 03:07:23.980291    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
	I0114 03:07:23.980298    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
	I0114 03:07:23.980309    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
	I0114 03:07:23.980316    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
	I0114 03:07:23.980322    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
	I0114 03:07:23.980327    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
	I0114 03:07:23.980336    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
	I0114 03:07:23.980347    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
	I0114 03:07:23.980354    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
	I0114 03:07:23.980363    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
	I0114 03:07:23.980369    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
	I0114 03:07:23.980378    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
	I0114 03:07:23.980384    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
	I0114 03:07:23.980391    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
	I0114 03:07:23.749533    9157 pod_ready.go:92] pod "coredns-565d847f94-wk8g2" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.749544    9157 pod_ready.go:81] duration metric: took 295.256786ms waiting for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.749553    9157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.149706    9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:24.149731    9157 pod_ready.go:81] duration metric: took 400.160741ms waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.149737    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.158190    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158207    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158210    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158221    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158392    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158444    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158456    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158458    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158461    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158483    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158469    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158502    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158508    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158527    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158704    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158710    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158718    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158730    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158738    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158735    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158751    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158759    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158908    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.159011    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.159025    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.179920    9157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 03:07:24.200426    9157 addons.go:488] enableAddons completed in 830.850832ms
	I0114 03:07:24.550392    9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:24.550424    9157 pod_ready.go:81] duration metric: took 400.664842ms waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.550431    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.949214    9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:24.949226    9157 pod_ready.go:81] duration metric: took 398.790966ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.949237    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.350138    9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:25.350151    9157 pod_ready.go:81] duration metric: took 400.910872ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.350162    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.749166    9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:25.749177    9157 pod_ready.go:81] duration metric: took 399.012421ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.749184    9157 pod_ready.go:38] duration metric: took 2.302012184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:07:25.749196    9157 api_server.go:51] waiting for apiserver process to appear ...
	I0114 03:07:25.749260    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:07:25.765950    9157 api_server.go:71] duration metric: took 2.396412835s to wait for apiserver process to appear ...
	I0114 03:07:25.765970    9157 api_server.go:87] waiting for apiserver healthz status ...
	I0114 03:07:25.765977    9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
	I0114 03:07:25.772427    9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 200:
	ok
	I0114 03:07:25.772956    9157 api_server.go:140] control plane version: v1.25.3
	I0114 03:07:25.772967    9157 api_server.go:130] duration metric: took 6.991805ms to wait for apiserver health ...
	I0114 03:07:25.772974    9157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 03:07:25.950643    9157 system_pods.go:59] 7 kube-system pods found
	I0114 03:07:25.950657    9157 system_pods.go:61] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
	I0114 03:07:25.950661    9157 system_pods.go:61] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
	I0114 03:07:25.950665    9157 system_pods.go:61] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
	I0114 03:07:25.950678    9157 system_pods.go:61] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
	I0114 03:07:25.950683    9157 system_pods.go:61] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
	I0114 03:07:25.950690    9157 system_pods.go:61] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
	I0114 03:07:25.950696    9157 system_pods.go:61] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
	I0114 03:07:25.950700    9157 system_pods.go:74] duration metric: took 177.722556ms to wait for pod list to return data ...
	I0114 03:07:25.950706    9157 default_sa.go:34] waiting for default service account to be created ...
	I0114 03:07:26.149504    9157 default_sa.go:45] found service account: "default"
	I0114 03:07:26.149520    9157 default_sa.go:55] duration metric: took 198.806394ms for default service account to be created ...
	I0114 03:07:26.149525    9157 system_pods.go:116] waiting for k8s-apps to be running ...
	I0114 03:07:26.350967    9157 system_pods.go:86] 7 kube-system pods found
	I0114 03:07:26.350980    9157 system_pods.go:89] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
	I0114 03:07:26.350985    9157 system_pods.go:89] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
	I0114 03:07:26.350988    9157 system_pods.go:89] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
	I0114 03:07:26.350992    9157 system_pods.go:89] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
	I0114 03:07:26.350999    9157 system_pods.go:89] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
	I0114 03:07:26.351005    9157 system_pods.go:89] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
	I0114 03:07:26.351011    9157 system_pods.go:89] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
	I0114 03:07:26.351017    9157 system_pods.go:126] duration metric: took 201.48912ms to wait for k8s-apps to be running ...
	I0114 03:07:26.351034    9157 system_svc.go:44] waiting for kubelet service to be running ....
	I0114 03:07:26.351110    9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:07:26.360848    9157 system_svc.go:56] duration metric: took 9.811651ms WaitForService to wait for kubelet.
	I0114 03:07:26.360864    9157 kubeadm.go:573] duration metric: took 2.991330205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0114 03:07:26.360876    9157 node_conditions.go:102] verifying NodePressure condition ...
	I0114 03:07:26.549739    9157 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0114 03:07:26.549755    9157 node_conditions.go:123] node cpu capacity is 2
	I0114 03:07:26.549762    9157 node_conditions.go:105] duration metric: took 188.883983ms to run NodePressure ...
	I0114 03:07:26.549769    9157 start.go:217] waiting for startup goroutines ...
	I0114 03:07:26.550105    9157 ssh_runner.go:195] Run: rm -f paused
	I0114 03:07:26.590700    9157 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0114 03:07:26.635229    9157 out.go:177] * Done! kubectl is now configured to use "pause-030526" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-01-14 11:05:33 UTC, ends at Sat 2023-01-14 11:07:27 UTC. --
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.331007077Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/00896af5ccd623a628f391767307c1a9d45e32343eddc996b752a9c7139727f6 pid=6084 runtime=io.containerd.runc.v2
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333349197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333448259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333458160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333734685Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/64b687a4b262b3705a237a5e8f1c05480509b41de28c1a76e6d5f8534499eed9 pid=6100 runtime=io.containerd.runc.v2
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348340304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348409628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348419175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348574713Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6bf08f44884c29bce8afaaee8a369ca1553b77a2f3f362f87893bed08be8580e pid=6134 runtime=io.containerd.runc.v2
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627815369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627899169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627909740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.629711389Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7a4778602ca817386ceb6b83b0cffa2e4273ed22dec5e1bd6af016c2cdbbc152 pid=6375 runtime=io.containerd.runc.v2
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635505843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635574619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635585017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635881814Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3fdcdd87125fc45218e55627224d289bb364f4e26591a574d4711c1e2bf755db pid=6391 runtime=io.containerd.runc.v2
	Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738126883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738274110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738296575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738419462Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b511107c0d65ed1187a9182a9b33f82bfbf4fa8cfee81c4ebdc2d2c2fc5ecc42 pid=6710 runtime=io.containerd.runc.v2
	Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035767688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035869182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035879278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.036327785Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4747fe303fd10345c6f83fc3afdd096d34c7cd162e74c11660dbc35198c8c91a pid=6755 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	4747fe303fd10       6e38f40d628db       3 seconds ago       Running             storage-provisioner       0                   b511107c0d65e
	3fdcdd87125fc       beaaf00edd38a       18 seconds ago      Running             kube-proxy                3                   4e57c85660d83
	7a4778602ca81       5185b96f0becf       18 seconds ago      Running             coredns                   2                   8919d849501d6
	6bf08f44884c2       6d23ec0e8b87e       23 seconds ago      Running             kube-scheduler            3                   687228c21ca63
	64b687a4b262b       6039992312758       23 seconds ago      Running             kube-controller-manager   3                   832b08b9a62e2
	fa0ae81988fe7       0346dbd74bcb9       23 seconds ago      Running             kube-apiserver            3                   ff4b3ee4f8ae5
	00896af5ccd62       a8a176a5d5d69       23 seconds ago      Running             etcd                      3                   ecaeb9f764e75
	a91b8dbf52b28       beaaf00edd38a       36 seconds ago      Created             kube-proxy                2                   be1781a847e83
	4ef492042630b       5185b96f0becf       36 seconds ago      Exited              coredns                   1                   5d6ae273017b7
	1f0472740d8e5       a8a176a5d5d69       36 seconds ago      Exited              etcd                      2                   9307465ae5847
	8cfdb196b1427       6039992312758       36 seconds ago      Exited              kube-controller-manager   2                   c7561d6051ce8
	ec5b05843edc6       0346dbd74bcb9       36 seconds ago      Exited              kube-apiserver            2                   a1988593cada4
	d1df9d20a995d       6d23ec0e8b87e       36 seconds ago      Exited              kube-scheduler            2                   76689e83a5147
	
	* 
	* ==> coredns [4ef492042630] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 8922087648600135430.3435341938167049804. HINFO: dial udp 192.168.64.1:53: connect: network is unreachable
	
	* 
	* ==> coredns [7a4778602ca8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               pause-030526
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-030526
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=pause-030526
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T03_06_03_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 11:06:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-030526
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 11:07:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:06:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:06:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:06:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:07:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.24
	  Hostname:    pause-030526
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017572Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017572Ki
	  pods:               110
	System Info:
	  Machine ID:                 5158a2f1d68b4728bdca3e981e3d16f1
	  System UUID:                59a511ed-0000-0000-93df-149d997cd0f1
	  Boot ID:                    7071b7f0-575a-4ffd-bad0-919bd7ad3180
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-wk8g2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 etcd-pause-030526                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         86s
	  kube-system                 kube-apiserver-pause-030526             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-030526    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-9lkcj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-030526             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                 From             Message
	  ----    ------                   ----                ----             -------
	  Normal  Starting                 71s                 kube-proxy       
	  Normal  Starting                 18s                 kube-proxy       
	  Normal  Starting                 51s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  100s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s (x7 over 100s)  kubelet          Node pause-030526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x6 over 100s)  kubelet          Node pause-030526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x6 over 100s)  kubelet          Node pause-030526 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     86s                 kubelet          Node pause-030526 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  86s                 kubelet          Node pause-030526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s                 kubelet          Node pause-030526 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                86s                 kubelet          Node pause-030526 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  86s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 86s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           75s                 node-controller  Node pause-030526 event: Registered Node pause-030526 in Controller
	  Normal  Starting                 25s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)   kubelet          Node pause-030526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)   kubelet          Node pause-030526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)   kubelet          Node pause-030526 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                  node-controller  Node pause-030526 event: Registered Node pause-030526 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.896084] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.842858] systemd-fstab-generator[530]: Ignoring "noauto" for root device
	[  +0.089665] systemd-fstab-generator[541]: Ignoring "noauto" for root device
	[  +5.167104] systemd-fstab-generator[762]: Ignoring "noauto" for root device
	[  +1.234233] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.224985] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.092006] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.090717] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +1.460171] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
	[  +0.081044] systemd-fstab-generator[1104]: Ignoring "noauto" for root device
	[  +2.991024] systemd-fstab-generator[1323]: Ignoring "noauto" for root device
	[  +0.466189] kauditd_printk_skb: 68 callbacks suppressed
	[Jan14 11:06] systemd-fstab-generator[2009]: Ignoring "noauto" for root device
	[ +12.288147] kauditd_printk_skb: 8 callbacks suppressed
	[ +11.014225] kauditd_printk_skb: 18 callbacks suppressed
	[  +4.097840] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
	[  +0.157534] systemd-fstab-generator[3048]: Ignoring "noauto" for root device
	[  +0.143509] systemd-fstab-generator[3059]: Ignoring "noauto" for root device
	[ +17.238898] systemd-fstab-generator[4389]: Ignoring "noauto" for root device
	[  +0.099215] systemd-fstab-generator[4443]: Ignoring "noauto" for root device
	[Jan14 11:07] kauditd_printk_skb: 31 callbacks suppressed
	[  +1.304783] systemd-fstab-generator[5886]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [00896af5ccd6] <==
	* {"level":"info","ts":"2023-01-14T11:07:05.150Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"db97d05830b4a428","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-14T11:07:05.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 switched to configuration voters=(15823344892982371368)"}
	{"level":"info","ts":"2023-01-14T11:07:05.150Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f9c405dda3109066","local-member-id":"db97d05830b4a428","added-peer-id":"db97d05830b4a428","added-peer-peer-urls":["https://192.168.64.24:2380"]}
	{"level":"info","ts":"2023-01-14T11:07:05.151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f9c405dda3109066","local-member-id":"db97d05830b4a428","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T11:07:05.151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T11:07:05.157Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"db97d05830b4a428","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-01-14T11:07:05.158Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-14T11:07:05.169Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"db97d05830b4a428","initial-advertise-peer-urls":["https://192.168.64.24:2380"],"listen-peer-urls":["https://192.168.64.24:2380"],"advertise-client-urls":["https://192.168.64.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T11:07:05.169Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T11:07:05.158Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.24:2380"}
	{"level":"info","ts":"2023-01-14T11:07:05.170Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.24:2380"}
	{"level":"info","ts":"2023-01-14T11:07:06.113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 received MsgPreVoteResp from db97d05830b4a428 at term 3"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became candidate at term 4"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 received MsgVoteResp from db97d05830b4a428 at term 4"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became leader at term 4"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db97d05830b4a428 elected leader db97d05830b4a428 at term 4"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"db97d05830b4a428","local-member-attributes":"{Name:pause-030526 ClientURLs:[https://192.168.64.24:2379]}","request-path":"/0/members/db97d05830b4a428/attributes","cluster-id":"f9c405dda3109066","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T11:07:06.115Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T11:07:06.115Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.24:2379"}
	{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [1f0472740d8e] <==
	* 
	* 
	* ==> kernel <==
	*  11:07:28 up 2 min,  0 users,  load average: 0.60, 0.29, 0.11
	Linux pause-030526 5.10.57 #1 SMP Thu Nov 17 20:18:45 UTC 2022 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ec5b05843edc] <==
	* 
	* 
	* ==> kube-apiserver [fa0ae81988fe] <==
	* I0114 11:07:07.835235       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0114 11:07:07.835321       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0114 11:07:07.835731       1 autoregister_controller.go:141] Starting autoregister controller
	I0114 11:07:07.835826       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0114 11:07:07.856121       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0114 11:07:07.856531       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0114 11:07:07.858292       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0114 11:07:07.858319       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0114 11:07:07.958455       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0114 11:07:08.030449       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 11:07:08.031216       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 11:07:08.032075       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 11:07:08.032931       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0114 11:07:08.035463       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 11:07:08.035912       1 cache.go:39] Caches are synced for autoregister controller
	I0114 11:07:08.037457       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 11:07:08.631959       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 11:07:08.834884       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 11:07:09.433088       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 11:07:09.439315       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 11:07:09.467659       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 11:07:09.481246       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 11:07:09.492033       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 11:07:20.415432       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0114 11:07:20.584645       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [64b687a4b262] <==
	* I0114 11:07:20.459506       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0114 11:07:20.462354       1 shared_informer.go:262] Caches are synced for expand
	I0114 11:07:20.462369       1 shared_informer.go:262] Caches are synced for namespace
	I0114 11:07:20.462460       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0114 11:07:20.465035       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0114 11:07:20.469843       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0114 11:07:20.469889       1 shared_informer.go:262] Caches are synced for TTL
	I0114 11:07:20.472294       1 shared_informer.go:262] Caches are synced for taint
	I0114 11:07:20.472382       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0114 11:07:20.472447       1 taint_manager.go:209] "Sending events to api server"
	I0114 11:07:20.472424       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W0114 11:07:20.472799       1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-030526. Assuming now as a timestamp.
	I0114 11:07:20.472929       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0114 11:07:20.473272       1 event.go:294] "Event occurred" object="pause-030526" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-030526 event: Registered Node pause-030526 in Controller"
	I0114 11:07:20.480533       1 shared_informer.go:262] Caches are synced for daemon sets
	I0114 11:07:20.490072       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0114 11:07:20.496927       1 shared_informer.go:262] Caches are synced for HPA
	I0114 11:07:20.574770       1 shared_informer.go:262] Caches are synced for endpoint
	I0114 11:07:20.589017       1 shared_informer.go:262] Caches are synced for disruption
	I0114 11:07:20.591967       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 11:07:20.598516       1 shared_informer.go:262] Caches are synced for stateful set
	I0114 11:07:20.621015       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 11:07:21.005895       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 11:07:21.066929       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 11:07:21.067007       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [8cfdb196b142] <==
	* 
	* 
	* ==> kube-proxy [3fdcdd87125f] <==
	* I0114 11:07:09.764942       1 node.go:163] Successfully retrieved node IP: 192.168.64.24
	I0114 11:07:09.765007       1 server_others.go:138] "Detected node IP" address="192.168.64.24"
	I0114 11:07:09.765022       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 11:07:09.789595       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0114 11:07:09.789674       1 server_others.go:206] "Using iptables Proxier"
	I0114 11:07:09.789705       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 11:07:09.789866       1 server.go:661] "Version info" version="v1.25.3"
	I0114 11:07:09.789895       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 11:07:09.791171       1 config.go:317] "Starting service config controller"
	I0114 11:07:09.791204       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 11:07:09.791233       1 config.go:226] "Starting endpoint slice config controller"
	I0114 11:07:09.791257       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 11:07:09.792096       1 config.go:444] "Starting node config controller"
	I0114 11:07:09.792122       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 11:07:09.892056       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 11:07:09.892223       1 shared_informer.go:262] Caches are synced for node config
	I0114 11:07:09.892063       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [a91b8dbf52b2] <==
	* 
	* 
	* ==> kube-scheduler [6bf08f44884c] <==
	* I0114 11:07:05.785495       1 serving.go:348] Generated self-signed cert in-memory
	W0114 11:07:07.930966       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0114 11:07:07.931087       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 11:07:07.931149       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 11:07:07.931342       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 11:07:07.946916       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 11:07:07.946999       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 11:07:07.947953       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 11:07:07.948063       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 11:07:07.949707       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 11:07:07.948083       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0114 11:07:07.964273       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0114 11:07:07.964466       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0114 11:07:07.964647       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 11:07:07.964697       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0114 11:07:07.964834       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0114 11:07:07.964957       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0114 11:07:08.050308       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [d1df9d20a995] <==
	* I0114 11:06:52.791744       1 serving.go:348] Generated self-signed cert in-memory
	W0114 11:06:53.280219       1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.64.24:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.64.24:8443: connect: connection refused
	W0114 11:06:53.280234       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 11:06:53.280239       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 11:06:53.282436       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 11:06:53.282467       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 11:06:53.284472       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 11:06:53.284543       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 11:06:53.284551       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 11:06:53.284723       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 11:06:53.284834       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0114 11:06:53.284988       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0114 11:06:53.285446       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 11:06:53.285486       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0114 11:06:53.285807       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-01-14 11:05:33 UTC, ends at Sat 2023-01-14 11:07:29 UTC. --
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.304093    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.404883    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.505492    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.606226    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.706862    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.807067    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: I0114 11:07:07.908125    5892 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: I0114 11:07:07.909327    5892 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.055387    5892 kubelet_node_status.go:108] "Node was previously registered" node="pause-030526"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.055555    5892 kubelet_node_status.go:73] "Successfully registered node" node="pause-030526"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.675696    5892 apiserver.go:52] "Watching apiserver"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.677949    5892 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.677999    5892 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821148    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72x7q\" (UniqueName: \"kubernetes.io/projected/eff0eea5-423e-4f30-9cc7-f0a187ccfbe4-kube-api-access-72x7q\") pod \"coredns-565d847f94-wk8g2\" (UID: \"eff0eea5-423e-4f30-9cc7-f0a187ccfbe4\") " pod="kube-system/coredns-565d847f94-wk8g2"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821518    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/937abbd6-9bb6-4df5-bda8-a01348c80cfa-kube-proxy\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821683    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eff0eea5-423e-4f30-9cc7-f0a187ccfbe4-config-volume\") pod \"coredns-565d847f94-wk8g2\" (UID: \"eff0eea5-423e-4f30-9cc7-f0a187ccfbe4\") " pod="kube-system/coredns-565d847f94-wk8g2"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821740    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/937abbd6-9bb6-4df5-bda8-a01348c80cfa-xtables-lock\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821852    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/937abbd6-9bb6-4df5-bda8-a01348c80cfa-lib-modules\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821958    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmt7j\" (UniqueName: \"kubernetes.io/projected/937abbd6-9bb6-4df5-bda8-a01348c80cfa-kube-api-access-zmt7j\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.822000    5892 reconciler.go:169] "Reconciler: start to sync state"
	Jan 14 11:07:09 pause-030526 kubelet[5892]: I0114 11:07:09.578961    5892 scope.go:115] "RemoveContainer" containerID="a91b8dbf52b2899bfa63a86f3b29f268678711d37ac71fba7ef99acfabef6696"
	Jan 14 11:07:09 pause-030526 kubelet[5892]: I0114 11:07:09.579108    5892 scope.go:115] "RemoveContainer" containerID="4ef492042630b948c5a7cf8834310194a4c1a14d0407a74904076077074843a0"
	Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.345531    5892 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.451536    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/14a8b558-cad1-44aa-8434-e31a93fcc6e0-tmp\") pod \"storage-provisioner\" (UID: \"14a8b558-cad1-44aa-8434-e31a93fcc6e0\") " pod="kube-system/storage-provisioner"
	Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.451687    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxk6\" (UniqueName: \"kubernetes.io/projected/14a8b558-cad1-44aa-8434-e31a93fcc6e0-kube-api-access-rjxk6\") pod \"storage-provisioner\" (UID: \"14a8b558-cad1-44aa-8434-e31a93fcc6e0\") " pod="kube-system/storage-provisioner"
	
	* 
	* ==> storage-provisioner [4747fe303fd1] <==
	* I0114 11:07:25.092946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 11:07:25.101139       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 11:07:25.101183       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 11:07:25.105549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 11:07:25.106105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9!
	I0114 11:07:25.107230       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"578576fd-279f-4e3d-946a-2f8e3400fd7a", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9 became leader
	I0114 11:07:25.207006       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-030526 -n pause-030526
helpers_test.go:261: (dbg) Run:  kubectl --context pause-030526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-030526 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-030526 describe pod : exit status 1 (41.000291ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context pause-030526 describe pod : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-030526 -n pause-030526
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-030526 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-030526 logs -n 25: (2.548410029s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
	|  Command   |              Args              |          Profile          |   User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
	| stop       | -p scheduled-stop-025204       | scheduled-stop-025204     | jenkins  | v1.28.0 | 14 Jan 23 02:53 PST |                     |
	|            | --schedule 15s                 |                           |          |         |                     |                     |
	| stop       | -p scheduled-stop-025204       | scheduled-stop-025204     | jenkins  | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	|            | --schedule 15s                 |                           |          |         |                     |                     |
	| delete     | -p scheduled-stop-025204       | scheduled-stop-025204     | jenkins  | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	| start      | -p skaffold-025353             | skaffold-025353           | jenkins  | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:54 PST |
	|            | --memory=2600                  |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| docker-env | --shell none -p                | skaffold-025353           | skaffold | v1.28.0 | 14 Jan 23 02:54 PST | 14 Jan 23 02:54 PST |
	|            | skaffold-025353                |                           |          |         |                     |                     |
	|            | --user=skaffold                |                           |          |         |                     |                     |
	| delete     | -p skaffold-025353             | skaffold-025353           | jenkins  | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 02:55 PST |
	| start      | -p offline-docker-025507       | offline-docker-025507     | jenkins  | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 03:02 PST |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --memory=2048 --wait=true      |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p auto-025507 --memory=2048   | auto-025507               | jenkins  | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 03:02 PST |
	|            | --alsologtostderr              |                           |          |         |                     |                     |
	|            | --wait=true --wait-timeout=5m  |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| ssh        | -p auto-025507 pgrep -a        | auto-025507               | jenkins  | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
	|            | kubelet                        |                           |          |         |                     |                     |
	| delete     | -p offline-docker-025507       | offline-docker-025507     | jenkins  | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
	| start      | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:03 PST |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.16.0   |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| delete     | -p auto-025507                 | auto-025507               | jenkins  | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
	| stop       | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:03 PST | 14 Jan 23 03:03 PST |
	| start      | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:03 PST | 14 Jan 23 03:04 PST |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.25.3   |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:04 PST |                     |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.16.0   |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:04 PST |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.25.3   |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p stopped-upgrade-030226      | stopped-upgrade-030226    | jenkins  | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:05 PST |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| delete     | -p kubernetes-upgrade-030216   | kubernetes-upgrade-030216 | jenkins  | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:04 PST |
	| delete     | -p stopped-upgrade-030226      | stopped-upgrade-030226    | jenkins  | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:05 PST |
	| start      | -p pause-030526 --memory=2048  | pause-030526              | jenkins  | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:06 PST |
	|            | --install-addons=false         |                           |          |         |                     |                     |
	|            | --wait=all --driver=hyperkit   |                           |          |         |                     |                     |
	| start      | -p running-upgrade-030435      | running-upgrade-030435    | jenkins  | v1.28.0 | 14 Jan 23 03:06 PST | 14 Jan 23 03:07 PST |
	|            | --memory=2200                  |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p pause-030526                | pause-030526              | jenkins  | v1.28.0 | 14 Jan 23 03:06 PST | 14 Jan 23 03:07 PST |
	|            | --alsologtostderr -v=1         |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| delete     | -p running-upgrade-030435      | running-upgrade-030435    | jenkins  | v1.28.0 | 14 Jan 23 03:07 PST | 14 Jan 23 03:07 PST |
	| start      | -p NoKubernetes-030718         | NoKubernetes-030718       | jenkins  | v1.28.0 | 14 Jan 23 03:07 PST |                     |
	|            | --no-kubernetes                |                           |          |         |                     |                     |
	|            | --kubernetes-version=1.20      |                           |          |         |                     |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	| start      | -p NoKubernetes-030718         | NoKubernetes-030718       | jenkins  | v1.28.0 | 14 Jan 23 03:07 PST |                     |
	|            | --driver=hyperkit              |                           |          |         |                     |                     |
	|------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 03:07:19
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 03:07:19.019830    9247 out.go:296] Setting OutFile to fd 1 ...
	I0114 03:07:19.020100    9247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:07:19.020104    9247 out.go:309] Setting ErrFile to fd 2...
	I0114 03:07:19.020107    9247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:07:19.020220    9247 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
	I0114 03:07:19.020722    9247 out.go:303] Setting JSON to false
	I0114 03:07:19.039498    9247 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4012,"bootTime":1673690427,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0114 03:07:19.039605    9247 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 03:07:19.077571    9247 out.go:177] * [NoKubernetes-030718] minikube v1.28.0 on Darwin 13.0.1
	I0114 03:07:19.136784    9247 notify.go:220] Checking for updates...
	I0114 03:07:19.174094    9247 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 03:07:19.232648    9247 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	I0114 03:07:19.306906    9247 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 03:07:19.328006    9247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 03:07:19.348934    9247 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	I0114 03:07:19.370518    9247 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:07:19.370581    9247 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 03:07:19.398770    9247 out.go:177] * Using the hyperkit driver based on user configuration
	I0114 03:07:19.441028    9247 start.go:294] selected driver: hyperkit
	I0114 03:07:19.441047    9247 start.go:838] validating driver "hyperkit" against <nil>
	I0114 03:07:19.441077    9247 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 03:07:19.441203    9247 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:07:19.441421    9247 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15642-1627/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0114 03:07:19.449241    9247 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0114 03:07:19.452425    9247 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:19.452438    9247 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0114 03:07:19.452505    9247 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 03:07:19.454697    9247 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0114 03:07:19.454829    9247 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0114 03:07:19.454850    9247 cni.go:95] Creating CNI manager for ""
	I0114 03:07:19.454857    9247 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:07:19.454867    9247 start_flags.go:319] config:
	{Name:NoKubernetes-030718 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-030718 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:07:19.454983    9247 iso.go:125] acquiring lock: {Name:mkf812bef4e208b28a360507a7c86d17e208f6c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:07:19.497050    9247 out.go:177] * Starting control plane node NoKubernetes-030718 in cluster NoKubernetes-030718
	I0114 03:07:19.518880    9247 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:07:19.518967    9247 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 03:07:19.518991    9247 cache.go:57] Caching tarball of preloaded images
	I0114 03:07:19.519216    9247 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 03:07:19.519232    9247 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 03:07:19.519384    9247 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/NoKubernetes-030718/config.json ...
	I0114 03:07:19.519444    9247 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/NoKubernetes-030718/config.json: {Name:mk5caec35ff8fcf3d9c5465ac05bd2e53369341a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:07:19.519982    9247 cache.go:193] Successfully downloaded all kic artifacts
	I0114 03:07:19.520014    9247 start.go:364] acquiring machines lock for NoKubernetes-030718: {Name:mkd798b4eb4b12534fdc8a3119639005936a788a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0114 03:07:19.520101    9247 start.go:368] acquired machines lock for "NoKubernetes-030718" in 77µs
	I0114 03:07:19.520133    9247 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-030718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-030718 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 03:07:19.520191    9247 start.go:125] createHost starting for "" (driver="hyperkit")
	I0114 03:07:19.342684    9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:19.342697    9157 pod_ready.go:81] duration metric: took 10.009021387s waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:19.342705    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:21.351765    9157 pod_ready.go:102] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"False"
	I0114 03:07:23.350513    9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.350547    9157 pod_ready.go:81] duration metric: took 4.007860495s waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.350554    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.353476    9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.353485    9157 pod_ready.go:81] duration metric: took 2.925304ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.353490    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.356134    9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.356142    9157 pod_ready.go:81] duration metric: took 2.647244ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.356148    9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.358793    9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.358800    9157 pod_ready.go:81] duration metric: took 2.641458ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.358804    9157 pod_ready.go:38] duration metric: took 14.032386778s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:07:23.358813    9157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 03:07:23.366176    9157 ops.go:34] apiserver oom_adj: -16
	I0114 03:07:23.366186    9157 kubeadm.go:631] restartCluster took 35.017662843s
	I0114 03:07:23.366207    9157 kubeadm.go:398] StartCluster complete in 35.039471935s
	I0114 03:07:23.366217    9157 settings.go:142] acquiring lock: {Name:mk0c64d56bf3ff3479e8fa9f559b4f9cf25d55df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:07:23.366305    9157 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15642-1627/kubeconfig
	I0114 03:07:23.366836    9157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1627/kubeconfig: {Name:mk9e4b5f5c881bca46b5d9046e1e4e38df78e527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:07:23.367658    9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]str
ing(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 03:07:23.369507    9157 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-030526" rescaled to 1
	I0114 03:07:23.369535    9157 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 03:07:23.369542    9157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 03:07:23.369576    9157 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0114 03:07:23.369692    9157 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:07:23.390480    9157 out.go:177] * Verifying Kubernetes components...
	I0114 03:07:23.390629    9157 addons.go:65] Setting storage-provisioner=true in profile "pause-030526"
	I0114 03:07:23.433350    9157 addons.go:227] Setting addon storage-provisioner=true in "pause-030526"
	I0114 03:07:23.390632    9157 addons.go:65] Setting default-storageclass=true in profile "pause-030526"
	W0114 03:07:23.433358    9157 addons.go:236] addon storage-provisioner should already be in state true
	I0114 03:07:23.433392    9157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-030526"
	I0114 03:07:23.433406    9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:07:23.430373    9157 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0114 03:07:23.433421    9157 host.go:66] Checking if "pause-030526" exists ...
	I0114 03:07:23.433815    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.433877    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.433873    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.433900    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.442841    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52806
	I0114 03:07:23.443203    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52808
	I0114 03:07:23.443537    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.443728    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.443899    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.443908    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.444057    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.444066    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.444119    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.444329    9157 node_ready.go:35] waiting up to 6m0s for node "pause-030526" to be "Ready" ...
	I0114 03:07:23.444380    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.444587    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.444602    9157 main.go:134] libmachine: (pause-030526) Calling .GetState
	I0114 03:07:23.444609    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.444705    9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.445301    9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
	I0114 03:07:23.447147    9157 node_ready.go:49] node "pause-030526" has status "Ready":"True"
	I0114 03:07:23.447164    9157 node_ready.go:38] duration metric: took 2.815218ms waiting for node "pause-030526" to be "Ready" ...
	I0114 03:07:23.447169    9157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:07:23.447225    9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 03:07:23.450515    9157 addons.go:227] Setting addon default-storageclass=true in "pause-030526"
	W0114 03:07:23.450531    9157 addons.go:236] addon default-storageclass should already be in state true
	I0114 03:07:23.450551    9157 host.go:66] Checking if "pause-030526" exists ...
	I0114 03:07:23.450887    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.450912    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.453524    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52810
	I0114 03:07:23.454275    9157 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.454289    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.454742    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.454758    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.455002    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.455108    9157 main.go:134] libmachine: (pause-030526) Calling .GetState
	I0114 03:07:23.455188    9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.455261    9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
	I0114 03:07:23.456200    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:07:23.459195    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52812
	I0114 03:07:23.477120    9157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 03:07:23.477524    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.498347    9157 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 03:07:23.498358    9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 03:07:23.498372    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:07:23.498499    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:07:23.498595    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:07:23.498695    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.498707    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.498780    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:07:23.498953    9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
	I0114 03:07:23.499031    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.499602    9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:23.499665    9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:23.508249    9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52815
	I0114 03:07:23.508606    9157 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:23.509066    9157 main.go:134] libmachine: Using API Version  1
	I0114 03:07:23.509081    9157 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:23.509378    9157 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:23.509472    9157 main.go:134] libmachine: (pause-030526) Calling .GetState
	I0114 03:07:23.509563    9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.509636    9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
	I0114 03:07:23.510952    9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
	I0114 03:07:23.511144    9157 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 03:07:23.511152    9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 03:07:23.511161    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
	I0114 03:07:23.511250    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
	I0114 03:07:23.511331    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
	I0114 03:07:23.511433    9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
	I0114 03:07:23.511524    9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
	I0114 03:07:23.553319    9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 03:07:23.563588    9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 03:07:19.541712    9247 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0114 03:07:19.542144    9247 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 03:07:19.542218    9247 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 03:07:19.550680    9247 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52804
	I0114 03:07:19.551062    9247 main.go:134] libmachine: () Calling .GetVersion
	I0114 03:07:19.551455    9247 main.go:134] libmachine: Using API Version  1
	I0114 03:07:19.551463    9247 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 03:07:19.551681    9247 main.go:134] libmachine: () Calling .GetMachineName
	I0114 03:07:19.551780    9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .GetMachineName
	I0114 03:07:19.551847    9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .DriverName
	I0114 03:07:19.551975    9247 start.go:159] libmachine.API.Create for "NoKubernetes-030718" (driver="hyperkit")
	I0114 03:07:19.552002    9247 client.go:168] LocalClient.Create starting
	I0114 03:07:19.552038    9247 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem
	I0114 03:07:19.552082    9247 main.go:134] libmachine: Decoding PEM data...
	I0114 03:07:19.552098    9247 main.go:134] libmachine: Parsing certificate...
	I0114 03:07:19.552156    9247 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/cert.pem
	I0114 03:07:19.552191    9247 main.go:134] libmachine: Decoding PEM data...
	I0114 03:07:19.552202    9247 main.go:134] libmachine: Parsing certificate...
	I0114 03:07:19.552213    9247 main.go:134] libmachine: Running pre-create checks...
	I0114 03:07:19.552220    9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .PreCreateCheck
	I0114 03:07:19.552322    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:19.552519    9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .GetConfigRaw
	I0114 03:07:19.552961    9247 main.go:134] libmachine: Creating machine...
	I0114 03:07:19.552966    9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .Create
	I0114 03:07:19.553050    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:19.553180    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.553037    9257 common.go:116] Making disk image using store path: /Users/jenkins/minikube-integration/15642-1627/.minikube
	I0114 03:07:19.553267    9247 main.go:134] libmachine: (NoKubernetes-030718) Downloading /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15642-1627/.minikube/cache/iso/amd64/minikube-v1.28.0-1668700269-15235-amd64.iso...
	I0114 03:07:19.718817    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.718695    9257 common.go:123] Creating ssh key: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/id_rsa...
	I0114 03:07:19.778524    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.778469    9257 common.go:129] Creating raw disk image: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk...
	I0114 03:07:19.778533    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Writing magic tar header
	I0114 03:07:19.778627    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Writing SSH key tar header
	I0114 03:07:19.779210    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.779155    9257 common.go:143] Fixing permissions on /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718 ...
	I0114 03:07:19.950626    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:19.950641    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid
	I0114 03:07:19.950650    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Using UUID 9cd1b71a-93fb-11ed-97d5-149d997cd0f1
	I0114 03:07:19.972204    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Generated MAC aa:b9:cb:46:9b:fa
	I0114 03:07:19.972218    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718
	I0114 03:07:19.972248    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9cd1b71a-93fb-11ed-97d5-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000182bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage", Initrd:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0114 03:07:19.972285    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9cd1b71a-93fb-11ed-97d5-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000182bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage", Initrd:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0114 03:07:19.972374    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9cd1b71a-93fb-11ed-97d5-149d997cd0f1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/tty,log=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718"}
	I0114 03:07:19.972409    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9cd1b71a-93fb-11ed-97d5-149d997cd0f1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/tty,log=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/console-ring -f kexec,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718"
	I0114 03:07:19.972414    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0114 03:07:19.973744    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Pid is 9258
	I0114 03:07:19.974155    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 0
	I0114 03:07:19.974164    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:19.974238    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
	I0114 03:07:19.975810    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
	I0114 03:07:19.976175    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0114 03:07:19.976189    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
	I0114 03:07:19.976220    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
	I0114 03:07:19.976231    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
	I0114 03:07:19.976241    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
	I0114 03:07:19.976250    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
	I0114 03:07:19.976259    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
	I0114 03:07:19.976268    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
	I0114 03:07:19.976274    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
	I0114 03:07:19.976280    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
	I0114 03:07:19.976296    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
	I0114 03:07:19.976308    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
	I0114 03:07:19.976317    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
	I0114 03:07:19.976325    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
	I0114 03:07:19.976336    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
	I0114 03:07:19.976344    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
	I0114 03:07:19.976352    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
	I0114 03:07:19.976359    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
	I0114 03:07:19.976380    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
	I0114 03:07:19.976389    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
	I0114 03:07:19.976395    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
	I0114 03:07:19.976400    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
	I0114 03:07:19.976423    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
	I0114 03:07:19.976436    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
	I0114 03:07:19.980329    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0114 03:07:19.989638    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0114 03:07:19.990267    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0114 03:07:19.990287    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0114 03:07:19.990297    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0114 03:07:19.990312    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0114 03:07:20.551946    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0114 03:07:20.551964    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0114 03:07:20.657062    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0114 03:07:20.657088    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0114 03:07:20.657094    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0114 03:07:20.657104    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0114 03:07:20.657933    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0114 03:07:20.657941    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0114 03:07:21.977330    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 1
	I0114 03:07:21.977341    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:21.977398    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
	I0114 03:07:21.978146    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
	I0114 03:07:21.978288    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0114 03:07:21.978295    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
	I0114 03:07:21.978302    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
	I0114 03:07:21.978329    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
	I0114 03:07:21.978337    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
	I0114 03:07:21.978342    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
	I0114 03:07:21.978348    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
	I0114 03:07:21.978368    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
	I0114 03:07:21.978374    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
	I0114 03:07:21.978398    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
	I0114 03:07:21.978413    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
	I0114 03:07:21.978423    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
	I0114 03:07:21.978433    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
	I0114 03:07:21.978438    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
	I0114 03:07:21.978445    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
	I0114 03:07:21.978490    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
	I0114 03:07:21.978515    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
	I0114 03:07:21.978525    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
	I0114 03:07:21.978530    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
	I0114 03:07:21.978536    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
	I0114 03:07:21.978543    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
	I0114 03:07:21.978549    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
	I0114 03:07:21.978555    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
	I0114 03:07:21.978562    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
	I0114 03:07:23.979244    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 2
	I0114 03:07:23.979260    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:23.979337    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
	I0114 03:07:23.980113    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
	I0114 03:07:23.980187    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0114 03:07:23.980199    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
	I0114 03:07:23.980208    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
	I0114 03:07:23.980214    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
	I0114 03:07:23.980233    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
	I0114 03:07:23.980241    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
	I0114 03:07:23.980250    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
	I0114 03:07:23.980259    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
	I0114 03:07:23.980265    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
	I0114 03:07:23.980277    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
	I0114 03:07:23.980291    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
	I0114 03:07:23.980298    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
	I0114 03:07:23.980309    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
	I0114 03:07:23.980316    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
	I0114 03:07:23.980322    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
	I0114 03:07:23.980327    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
	I0114 03:07:23.980336    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
	I0114 03:07:23.980347    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
	I0114 03:07:23.980354    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
	I0114 03:07:23.980363    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
	I0114 03:07:23.980369    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
	I0114 03:07:23.980378    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
	I0114 03:07:23.980384    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
	I0114 03:07:23.980391    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
	I0114 03:07:23.749533    9157 pod_ready.go:92] pod "coredns-565d847f94-wk8g2" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:23.749544    9157 pod_ready.go:81] duration metric: took 295.256786ms waiting for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:23.749553    9157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.149706    9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:24.149731    9157 pod_ready.go:81] duration metric: took 400.160741ms waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.149737    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.158190    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158207    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158210    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158221    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158392    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158444    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158456    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158458    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158461    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158483    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158469    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158502    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158508    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158527    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158704    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158710    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158718    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158730    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.158738    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.158735    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.158751    9157 main.go:134] libmachine: Making call to close driver server
	I0114 03:07:24.158759    9157 main.go:134] libmachine: (pause-030526) Calling .Close
	I0114 03:07:24.158908    9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
	I0114 03:07:24.159011    9157 main.go:134] libmachine: Successfully made call to close driver server
	I0114 03:07:24.159025    9157 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 03:07:24.179920    9157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 03:07:24.200426    9157 addons.go:488] enableAddons completed in 830.850832ms
	I0114 03:07:24.550392    9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:24.550424    9157 pod_ready.go:81] duration metric: took 400.664842ms waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.550431    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.949214    9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:24.949226    9157 pod_ready.go:81] duration metric: took 398.790966ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:24.949237    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.350138    9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:25.350151    9157 pod_ready.go:81] duration metric: took 400.910872ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.350162    9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.749166    9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
	I0114 03:07:25.749177    9157 pod_ready.go:81] duration metric: took 399.012421ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
	I0114 03:07:25.749184    9157 pod_ready.go:38] duration metric: took 2.302012184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:07:25.749196    9157 api_server.go:51] waiting for apiserver process to appear ...
	I0114 03:07:25.749260    9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:07:25.765950    9157 api_server.go:71] duration metric: took 2.396412835s to wait for apiserver process to appear ...
	I0114 03:07:25.765970    9157 api_server.go:87] waiting for apiserver healthz status ...
	I0114 03:07:25.765977    9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
	I0114 03:07:25.772427    9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 200:
	ok
	I0114 03:07:25.772956    9157 api_server.go:140] control plane version: v1.25.3
	I0114 03:07:25.772967    9157 api_server.go:130] duration metric: took 6.991805ms to wait for apiserver health ...
	I0114 03:07:25.772974    9157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 03:07:25.950643    9157 system_pods.go:59] 7 kube-system pods found
	I0114 03:07:25.950657    9157 system_pods.go:61] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
	I0114 03:07:25.950661    9157 system_pods.go:61] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
	I0114 03:07:25.950665    9157 system_pods.go:61] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
	I0114 03:07:25.950678    9157 system_pods.go:61] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
	I0114 03:07:25.950683    9157 system_pods.go:61] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
	I0114 03:07:25.950690    9157 system_pods.go:61] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
	I0114 03:07:25.950696    9157 system_pods.go:61] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
	I0114 03:07:25.950700    9157 system_pods.go:74] duration metric: took 177.722556ms to wait for pod list to return data ...
	I0114 03:07:25.950706    9157 default_sa.go:34] waiting for default service account to be created ...
	I0114 03:07:26.149504    9157 default_sa.go:45] found service account: "default"
	I0114 03:07:26.149520    9157 default_sa.go:55] duration metric: took 198.806394ms for default service account to be created ...
	I0114 03:07:26.149525    9157 system_pods.go:116] waiting for k8s-apps to be running ...
	I0114 03:07:26.350967    9157 system_pods.go:86] 7 kube-system pods found
	I0114 03:07:26.350980    9157 system_pods.go:89] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
	I0114 03:07:26.350985    9157 system_pods.go:89] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
	I0114 03:07:26.350988    9157 system_pods.go:89] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
	I0114 03:07:26.350992    9157 system_pods.go:89] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
	I0114 03:07:26.350999    9157 system_pods.go:89] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
	I0114 03:07:26.351005    9157 system_pods.go:89] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
	I0114 03:07:26.351011    9157 system_pods.go:89] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
	I0114 03:07:26.351017    9157 system_pods.go:126] duration metric: took 201.48912ms to wait for k8s-apps to be running ...
	I0114 03:07:26.351034    9157 system_svc.go:44] waiting for kubelet service to be running ....
	I0114 03:07:26.351110    9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:07:26.360848    9157 system_svc.go:56] duration metric: took 9.811651ms WaitForService to wait for kubelet.
	I0114 03:07:26.360864    9157 kubeadm.go:573] duration metric: took 2.991330205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0114 03:07:26.360876    9157 node_conditions.go:102] verifying NodePressure condition ...
	I0114 03:07:26.549739    9157 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0114 03:07:26.549755    9157 node_conditions.go:123] node cpu capacity is 2
	I0114 03:07:26.549762    9157 node_conditions.go:105] duration metric: took 188.883983ms to run NodePressure ...
	I0114 03:07:26.549769    9157 start.go:217] waiting for startup goroutines ...
	I0114 03:07:26.550105    9157 ssh_runner.go:195] Run: rm -f paused
	I0114 03:07:26.590700    9157 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0114 03:07:26.635229    9157 out.go:177] * Done! kubectl is now configured to use "pause-030526" cluster and "default" namespace by default
	I0114 03:07:25.189357    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0114 03:07:25.189405    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0114 03:07:25.189411    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0114 03:07:25.981082    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 3
	I0114 03:07:25.981090    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:25.981211    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
	I0114 03:07:25.982406    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
	I0114 03:07:25.982467    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0114 03:07:25.982475    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
	I0114 03:07:25.982483    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
	I0114 03:07:25.982489    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
	I0114 03:07:25.982494    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
	I0114 03:07:25.982499    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
	I0114 03:07:25.982508    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
	I0114 03:07:25.982516    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
	I0114 03:07:25.982523    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
	I0114 03:07:25.982530    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
	I0114 03:07:25.982537    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
	I0114 03:07:25.982543    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
	I0114 03:07:25.982572    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
	I0114 03:07:25.982582    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
	I0114 03:07:25.982591    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
	I0114 03:07:25.982598    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
	I0114 03:07:25.982604    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
	I0114 03:07:25.982609    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
	I0114 03:07:25.982620    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
	I0114 03:07:25.982629    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
	I0114 03:07:25.982637    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
	I0114 03:07:25.982644    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
	I0114 03:07:25.982653    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
	I0114 03:07:25.982660    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
	I0114 03:07:27.984344    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 4
	I0114 03:07:27.984372    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 03:07:27.984500    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
	I0114 03:07:27.985325    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
	I0114 03:07:27.985469    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0114 03:07:27.985477    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
	I0114 03:07:27.985494    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
	I0114 03:07:27.985505    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
	I0114 03:07:27.985515    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
	I0114 03:07:27.985521    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
	I0114 03:07:27.985527    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
	I0114 03:07:27.985532    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
	I0114 03:07:27.985537    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
	I0114 03:07:27.985542    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
	I0114 03:07:27.985551    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
	I0114 03:07:27.985560    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
	I0114 03:07:27.985565    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
	I0114 03:07:27.985586    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
	I0114 03:07:27.985618    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
	I0114 03:07:27.985627    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
	I0114 03:07:27.985635    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
	I0114 03:07:27.985655    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
	I0114 03:07:27.985691    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
	I0114 03:07:27.985704    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
	I0114 03:07:27.985728    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
	I0114 03:07:27.985735    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
	I0114 03:07:27.985741    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
	I0114 03:07:27.985746    9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-01-14 11:05:33 UTC, ends at Sat 2023-01-14 11:07:30 UTC. --
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.331007077Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/00896af5ccd623a628f391767307c1a9d45e32343eddc996b752a9c7139727f6 pid=6084 runtime=io.containerd.runc.v2
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333349197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333448259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333458160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333734685Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/64b687a4b262b3705a237a5e8f1c05480509b41de28c1a76e6d5f8534499eed9 pid=6100 runtime=io.containerd.runc.v2
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348340304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348409628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348419175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348574713Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6bf08f44884c29bce8afaaee8a369ca1553b77a2f3f362f87893bed08be8580e pid=6134 runtime=io.containerd.runc.v2
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627815369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627899169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627909740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.629711389Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7a4778602ca817386ceb6b83b0cffa2e4273ed22dec5e1bd6af016c2cdbbc152 pid=6375 runtime=io.containerd.runc.v2
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635505843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635574619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635585017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635881814Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3fdcdd87125fc45218e55627224d289bb364f4e26591a574d4711c1e2bf755db pid=6391 runtime=io.containerd.runc.v2
	Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738126883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738274110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738296575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738419462Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b511107c0d65ed1187a9182a9b33f82bfbf4fa8cfee81c4ebdc2d2c2fc5ecc42 pid=6710 runtime=io.containerd.runc.v2
	Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035767688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035869182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035879278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.036327785Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4747fe303fd10345c6f83fc3afdd096d34c7cd162e74c11660dbc35198c8c91a pid=6755 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	4747fe303fd10       6e38f40d628db       7 seconds ago       Running             storage-provisioner       0                   b511107c0d65e
	3fdcdd87125fc       beaaf00edd38a       22 seconds ago      Running             kube-proxy                3                   4e57c85660d83
	7a4778602ca81       5185b96f0becf       22 seconds ago      Running             coredns                   2                   8919d849501d6
	6bf08f44884c2       6d23ec0e8b87e       27 seconds ago      Running             kube-scheduler            3                   687228c21ca63
	64b687a4b262b       6039992312758       27 seconds ago      Running             kube-controller-manager   3                   832b08b9a62e2
	fa0ae81988fe7       0346dbd74bcb9       27 seconds ago      Running             kube-apiserver            3                   ff4b3ee4f8ae5
	00896af5ccd62       a8a176a5d5d69       27 seconds ago      Running             etcd                      3                   ecaeb9f764e75
	a91b8dbf52b28       beaaf00edd38a       40 seconds ago      Created             kube-proxy                2                   be1781a847e83
	4ef492042630b       5185b96f0becf       40 seconds ago      Exited              coredns                   1                   5d6ae273017b7
	1f0472740d8e5       a8a176a5d5d69       40 seconds ago      Exited              etcd                      2                   9307465ae5847
	8cfdb196b1427       6039992312758       40 seconds ago      Exited              kube-controller-manager   2                   c7561d6051ce8
	ec5b05843edc6       0346dbd74bcb9       40 seconds ago      Exited              kube-apiserver            2                   a1988593cada4
	d1df9d20a995d       6d23ec0e8b87e       40 seconds ago      Exited              kube-scheduler            2                   76689e83a5147
	
	* 
	* ==> coredns [4ef492042630] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 8922087648600135430.3435341938167049804. HINFO: dial udp 192.168.64.1:53: connect: network is unreachable
	
	* 
	* ==> coredns [7a4778602ca8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               pause-030526
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-030526
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=pause-030526
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T03_06_03_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 11:06:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-030526
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 11:07:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:06:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:06:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:06:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:07:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.24
	  Hostname:    pause-030526
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017572Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017572Ki
	  pods:               110
	System Info:
	  Machine ID:                 5158a2f1d68b4728bdca3e981e3d16f1
	  System UUID:                59a511ed-0000-0000-93df-149d997cd0f1
	  Boot ID:                    7071b7f0-575a-4ffd-bad0-919bd7ad3180
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-wk8g2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-pause-030526                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         89s
	  kube-system                 kube-apiserver-pause-030526             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-030526    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-9lkcj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-030526             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 75s                  kube-proxy       
	  Normal  Starting                 21s                  kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  102s (x7 over 103s)  kubelet          Node pause-030526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x6 over 103s)  kubelet          Node pause-030526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x6 over 103s)  kubelet          Node pause-030526 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     89s                  kubelet          Node pause-030526 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  89s                  kubelet          Node pause-030526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                  kubelet          Node pause-030526 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                89s                  kubelet          Node pause-030526 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  89s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 89s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           78s                  node-controller  Node pause-030526 event: Registered Node pause-030526 in Controller
	  Normal  Starting                 28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)    kubelet          Node pause-030526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)    kubelet          Node pause-030526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)    kubelet          Node pause-030526 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11s                  node-controller  Node pause-030526 event: Registered Node pause-030526 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.896084] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.842858] systemd-fstab-generator[530]: Ignoring "noauto" for root device
	[  +0.089665] systemd-fstab-generator[541]: Ignoring "noauto" for root device
	[  +5.167104] systemd-fstab-generator[762]: Ignoring "noauto" for root device
	[  +1.234233] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.224985] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.092006] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.090717] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +1.460171] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
	[  +0.081044] systemd-fstab-generator[1104]: Ignoring "noauto" for root device
	[  +2.991024] systemd-fstab-generator[1323]: Ignoring "noauto" for root device
	[  +0.466189] kauditd_printk_skb: 68 callbacks suppressed
	[Jan14 11:06] systemd-fstab-generator[2009]: Ignoring "noauto" for root device
	[ +12.288147] kauditd_printk_skb: 8 callbacks suppressed
	[ +11.014225] kauditd_printk_skb: 18 callbacks suppressed
	[  +4.097840] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
	[  +0.157534] systemd-fstab-generator[3048]: Ignoring "noauto" for root device
	[  +0.143509] systemd-fstab-generator[3059]: Ignoring "noauto" for root device
	[ +17.238898] systemd-fstab-generator[4389]: Ignoring "noauto" for root device
	[  +0.099215] systemd-fstab-generator[4443]: Ignoring "noauto" for root device
	[Jan14 11:07] kauditd_printk_skb: 31 callbacks suppressed
	[  +1.304783] systemd-fstab-generator[5886]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [00896af5ccd6] <==
	* {"level":"info","ts":"2023-01-14T11:07:05.150Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"db97d05830b4a428","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-14T11:07:05.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 switched to configuration voters=(15823344892982371368)"}
	{"level":"info","ts":"2023-01-14T11:07:05.150Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f9c405dda3109066","local-member-id":"db97d05830b4a428","added-peer-id":"db97d05830b4a428","added-peer-peer-urls":["https://192.168.64.24:2380"]}
	{"level":"info","ts":"2023-01-14T11:07:05.151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f9c405dda3109066","local-member-id":"db97d05830b4a428","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T11:07:05.151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T11:07:05.157Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"db97d05830b4a428","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-01-14T11:07:05.158Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-14T11:07:05.169Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"db97d05830b4a428","initial-advertise-peer-urls":["https://192.168.64.24:2380"],"listen-peer-urls":["https://192.168.64.24:2380"],"advertise-client-urls":["https://192.168.64.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T11:07:05.169Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T11:07:05.158Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.24:2380"}
	{"level":"info","ts":"2023-01-14T11:07:05.170Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.24:2380"}
	{"level":"info","ts":"2023-01-14T11:07:06.113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 received MsgPreVoteResp from db97d05830b4a428 at term 3"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became candidate at term 4"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 received MsgVoteResp from db97d05830b4a428 at term 4"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became leader at term 4"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db97d05830b4a428 elected leader db97d05830b4a428 at term 4"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"db97d05830b4a428","local-member-attributes":"{Name:pause-030526 ClientURLs:[https://192.168.64.24:2379]}","request-path":"/0/members/db97d05830b4a428/attributes","cluster-id":"f9c405dda3109066","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T11:07:06.114Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T11:07:06.115Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T11:07:06.115Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.24:2379"}
	{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [1f0472740d8e] <==
	* 
	* 
	* ==> kernel <==
	*  11:07:31 up 2 min,  0 users,  load average: 0.60, 0.29, 0.11
	Linux pause-030526 5.10.57 #1 SMP Thu Nov 17 20:18:45 UTC 2022 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ec5b05843edc] <==
	* 
	* 
	* ==> kube-apiserver [fa0ae81988fe] <==
	* I0114 11:07:07.835235       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0114 11:07:07.835321       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0114 11:07:07.835731       1 autoregister_controller.go:141] Starting autoregister controller
	I0114 11:07:07.835826       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0114 11:07:07.856121       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0114 11:07:07.856531       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0114 11:07:07.858292       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0114 11:07:07.858319       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0114 11:07:07.958455       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0114 11:07:08.030449       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 11:07:08.031216       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 11:07:08.032075       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 11:07:08.032931       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0114 11:07:08.035463       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 11:07:08.035912       1 cache.go:39] Caches are synced for autoregister controller
	I0114 11:07:08.037457       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 11:07:08.631959       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 11:07:08.834884       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 11:07:09.433088       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 11:07:09.439315       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 11:07:09.467659       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 11:07:09.481246       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 11:07:09.492033       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 11:07:20.415432       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0114 11:07:20.584645       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [64b687a4b262] <==
	* I0114 11:07:20.459506       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0114 11:07:20.462354       1 shared_informer.go:262] Caches are synced for expand
	I0114 11:07:20.462369       1 shared_informer.go:262] Caches are synced for namespace
	I0114 11:07:20.462460       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0114 11:07:20.465035       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0114 11:07:20.469843       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0114 11:07:20.469889       1 shared_informer.go:262] Caches are synced for TTL
	I0114 11:07:20.472294       1 shared_informer.go:262] Caches are synced for taint
	I0114 11:07:20.472382       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0114 11:07:20.472447       1 taint_manager.go:209] "Sending events to api server"
	I0114 11:07:20.472424       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W0114 11:07:20.472799       1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-030526. Assuming now as a timestamp.
	I0114 11:07:20.472929       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0114 11:07:20.473272       1 event.go:294] "Event occurred" object="pause-030526" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-030526 event: Registered Node pause-030526 in Controller"
	I0114 11:07:20.480533       1 shared_informer.go:262] Caches are synced for daemon sets
	I0114 11:07:20.490072       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0114 11:07:20.496927       1 shared_informer.go:262] Caches are synced for HPA
	I0114 11:07:20.574770       1 shared_informer.go:262] Caches are synced for endpoint
	I0114 11:07:20.589017       1 shared_informer.go:262] Caches are synced for disruption
	I0114 11:07:20.591967       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 11:07:20.598516       1 shared_informer.go:262] Caches are synced for stateful set
	I0114 11:07:20.621015       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 11:07:21.005895       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 11:07:21.066929       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 11:07:21.067007       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [8cfdb196b142] <==
	* 
	* 
	* ==> kube-proxy [3fdcdd87125f] <==
	* I0114 11:07:09.764942       1 node.go:163] Successfully retrieved node IP: 192.168.64.24
	I0114 11:07:09.765007       1 server_others.go:138] "Detected node IP" address="192.168.64.24"
	I0114 11:07:09.765022       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 11:07:09.789595       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0114 11:07:09.789674       1 server_others.go:206] "Using iptables Proxier"
	I0114 11:07:09.789705       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 11:07:09.789866       1 server.go:661] "Version info" version="v1.25.3"
	I0114 11:07:09.789895       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 11:07:09.791171       1 config.go:317] "Starting service config controller"
	I0114 11:07:09.791204       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 11:07:09.791233       1 config.go:226] "Starting endpoint slice config controller"
	I0114 11:07:09.791257       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 11:07:09.792096       1 config.go:444] "Starting node config controller"
	I0114 11:07:09.792122       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 11:07:09.892056       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 11:07:09.892223       1 shared_informer.go:262] Caches are synced for node config
	I0114 11:07:09.892063       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [a91b8dbf52b2] <==
	* 
	* 
	* ==> kube-scheduler [6bf08f44884c] <==
	* I0114 11:07:05.785495       1 serving.go:348] Generated self-signed cert in-memory
	W0114 11:07:07.930966       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0114 11:07:07.931087       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 11:07:07.931149       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 11:07:07.931342       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 11:07:07.946916       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 11:07:07.946999       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 11:07:07.947953       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 11:07:07.948063       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 11:07:07.949707       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 11:07:07.948083       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0114 11:07:07.964273       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0114 11:07:07.964466       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0114 11:07:07.964647       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 11:07:07.964697       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0114 11:07:07.964834       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0114 11:07:07.964957       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0114 11:07:08.050308       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [d1df9d20a995] <==
	* I0114 11:06:52.791744       1 serving.go:348] Generated self-signed cert in-memory
	W0114 11:06:53.280219       1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.64.24:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.64.24:8443: connect: connection refused
	W0114 11:06:53.280234       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 11:06:53.280239       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 11:06:53.282436       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 11:06:53.282467       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 11:06:53.284472       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 11:06:53.284543       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 11:06:53.284551       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 11:06:53.284723       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 11:06:53.284834       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0114 11:06:53.284988       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0114 11:06:53.285446       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 11:06:53.285486       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0114 11:06:53.285807       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-01-14 11:05:33 UTC, ends at Sat 2023-01-14 11:07:32 UTC. --
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.304093    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.404883    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.505492    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.606226    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.706862    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.807067    5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: I0114 11:07:07.908125    5892 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 14 11:07:07 pause-030526 kubelet[5892]: I0114 11:07:07.909327    5892 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.055387    5892 kubelet_node_status.go:108] "Node was previously registered" node="pause-030526"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.055555    5892 kubelet_node_status.go:73] "Successfully registered node" node="pause-030526"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.675696    5892 apiserver.go:52] "Watching apiserver"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.677949    5892 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.677999    5892 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821148    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72x7q\" (UniqueName: \"kubernetes.io/projected/eff0eea5-423e-4f30-9cc7-f0a187ccfbe4-kube-api-access-72x7q\") pod \"coredns-565d847f94-wk8g2\" (UID: \"eff0eea5-423e-4f30-9cc7-f0a187ccfbe4\") " pod="kube-system/coredns-565d847f94-wk8g2"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821518    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/937abbd6-9bb6-4df5-bda8-a01348c80cfa-kube-proxy\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821683    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eff0eea5-423e-4f30-9cc7-f0a187ccfbe4-config-volume\") pod \"coredns-565d847f94-wk8g2\" (UID: \"eff0eea5-423e-4f30-9cc7-f0a187ccfbe4\") " pod="kube-system/coredns-565d847f94-wk8g2"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821740    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/937abbd6-9bb6-4df5-bda8-a01348c80cfa-xtables-lock\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821852    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/937abbd6-9bb6-4df5-bda8-a01348c80cfa-lib-modules\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821958    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmt7j\" (UniqueName: \"kubernetes.io/projected/937abbd6-9bb6-4df5-bda8-a01348c80cfa-kube-api-access-zmt7j\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
	Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.822000    5892 reconciler.go:169] "Reconciler: start to sync state"
	Jan 14 11:07:09 pause-030526 kubelet[5892]: I0114 11:07:09.578961    5892 scope.go:115] "RemoveContainer" containerID="a91b8dbf52b2899bfa63a86f3b29f268678711d37ac71fba7ef99acfabef6696"
	Jan 14 11:07:09 pause-030526 kubelet[5892]: I0114 11:07:09.579108    5892 scope.go:115] "RemoveContainer" containerID="4ef492042630b948c5a7cf8834310194a4c1a14d0407a74904076077074843a0"
	Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.345531    5892 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.451536    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/14a8b558-cad1-44aa-8434-e31a93fcc6e0-tmp\") pod \"storage-provisioner\" (UID: \"14a8b558-cad1-44aa-8434-e31a93fcc6e0\") " pod="kube-system/storage-provisioner"
	Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.451687    5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxk6\" (UniqueName: \"kubernetes.io/projected/14a8b558-cad1-44aa-8434-e31a93fcc6e0-kube-api-access-rjxk6\") pod \"storage-provisioner\" (UID: \"14a8b558-cad1-44aa-8434-e31a93fcc6e0\") " pod="kube-system/storage-provisioner"
	
	* 
	* ==> storage-provisioner [4747fe303fd1] <==
	* I0114 11:07:25.092946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 11:07:25.101139       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 11:07:25.101183       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 11:07:25.105549       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 11:07:25.106105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9!
	I0114 11:07:25.107230       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"578576fd-279f-4e3d-946a-2f8e3400fd7a", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9 became leader
	I0114 11:07:25.207006       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-030526 -n pause-030526
helpers_test.go:261: (dbg) Run:  kubectl --context pause-030526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-030526 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-030526 describe pod : exit status 1 (39.488441ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context pause-030526 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (64.51s)

TestNetworkPlugins/group/kubenet/HairPin (52.87s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.102039168s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.119251416s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.105134512s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.117906648s)

** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.101433891s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0114 03:17:45.064737    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:17:45.069890    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:17:45.080219    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:17:45.100277    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:17:45.140772    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:17:45.221682    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:17:45.381787    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:17:45.702566    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0114 03:17:46.342687    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:17:47.623830    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110684761s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0114 03:17:55.305545    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.114816459s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (52.87s)
E0114 03:37:22.952939    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory


Test pass (283/302)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 18.49
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.25.3/json-events 11.88
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.3
16 TestDownloadOnly/DeleteAll 0.42
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
19 TestBinaryMirror 1
20 TestOffline 428.61
22 TestAddons/Setup 138.82
24 TestAddons/parallel/Registry 16.53
25 TestAddons/parallel/Ingress 19.75
26 TestAddons/parallel/MetricsServer 5.48
27 TestAddons/parallel/HelmTiller 12.57
29 TestAddons/parallel/CSI 44.56
30 TestAddons/parallel/Headlamp 10.26
31 TestAddons/parallel/CloudSpanner 5.33
34 TestAddons/serial/GCPAuth/Namespaces 0.1
35 TestAddons/StoppedEnableDisable 3.61
36 TestCertOptions 41.05
37 TestCertExpiration 255.49
38 TestDockerFlags 46.41
39 TestForceSystemdFlag 43.63
40 TestForceSystemdEnv 41.45
42 TestHyperKitDriverInstallOrUpdate 8.36
45 TestErrorSpam/setup 37.8
46 TestErrorSpam/start 1.37
47 TestErrorSpam/status 0.5
48 TestErrorSpam/pause 1.33
49 TestErrorSpam/unpause 1.34
50 TestErrorSpam/stop 3.68
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 56.12
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 39.59
57 TestFunctional/serial/KubeContext 0.03
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 8.77
62 TestFunctional/serial/CacheCmd/cache/add_local 1.52
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.08
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.05
67 TestFunctional/serial/CacheCmd/cache/delete 0.18
68 TestFunctional/serial/MinikubeKubectlCmd 0.5
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.68
70 TestFunctional/serial/ExtraConfig 58.38
71 TestFunctional/serial/ComponentHealth 0.05
72 TestFunctional/serial/LogsCmd 2.64
73 TestFunctional/serial/LogsFileCmd 2.77
75 TestFunctional/parallel/ConfigCmd 0.51
76 TestFunctional/parallel/DashboardCmd 8.4
77 TestFunctional/parallel/DryRun 0.9
78 TestFunctional/parallel/InternationalLanguage 0.48
79 TestFunctional/parallel/StatusCmd 0.48
82 TestFunctional/parallel/ServiceCmd 13.27
83 TestFunctional/parallel/ServiceCmdConnect 8.57
84 TestFunctional/parallel/AddonsCmd 0.27
85 TestFunctional/parallel/PersistentVolumeClaim 24
87 TestFunctional/parallel/SSHCmd 0.32
88 TestFunctional/parallel/CpCmd 0.66
89 TestFunctional/parallel/MySQL 22.46
90 TestFunctional/parallel/FileSync 0.19
91 TestFunctional/parallel/CertSync 1.11
95 TestFunctional/parallel/NodeLabels 0.08
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.13
99 TestFunctional/parallel/License 0.86
100 TestFunctional/parallel/Version/short 0.1
101 TestFunctional/parallel/Version/components 0.61
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.16
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.18
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.17
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.16
106 TestFunctional/parallel/ImageCommands/ImageBuild 4.03
107 TestFunctional/parallel/ImageCommands/Setup 3.56
108 TestFunctional/parallel/DockerEnv/bash 0.85
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.24
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.23
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.57
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.95
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.36
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.1
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.36
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
130 TestFunctional/parallel/ProfileCmd/profile_list 0.29
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
132 TestFunctional/parallel/MountCmd/any-port 8.25
133 TestFunctional/parallel/MountCmd/specific-port 1.37
134 TestFunctional/delete_addon-resizer_images 0.16
135 TestFunctional/delete_my-image_image 0.06
136 TestFunctional/delete_minikube_cached_images 0.06
139 TestIngressAddonLegacy/StartLegacyK8sCluster 120.09
141 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.74
142 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.49
143 TestIngressAddonLegacy/serial/ValidateIngressAddons 46.86
146 TestJSONOutput/start/Command 53.64
147 TestJSONOutput/start/Audit 0
149 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
150 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
152 TestJSONOutput/pause/Command 0.5
153 TestJSONOutput/pause/Audit 0
155 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/unpause/Command 0.48
159 TestJSONOutput/unpause/Audit 0
161 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/stop/Command 8.16
165 TestJSONOutput/stop/Audit 0
167 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
169 TestErrorJSONOutput 0.74
174 TestMainNoArgs 0.08
175 TestMinikubeProfile 90.97
178 TestMountStart/serial/StartWithMountFirst 15.15
179 TestMountStart/serial/VerifyMountFirst 0.3
180 TestMountStart/serial/StartWithMountSecond 14.47
181 TestMountStart/serial/VerifyMountSecond 0.29
182 TestMountStart/serial/DeleteFirst 2.37
183 TestMountStart/serial/VerifyMountPostDelete 0.29
184 TestMountStart/serial/Stop 2.24
185 TestMountStart/serial/RestartStopped 16.44
186 TestMountStart/serial/VerifyMountPostStop 0.31
189 TestMultiNode/serial/FreshStart2Nodes 136.94
190 TestMultiNode/serial/DeployApp2Nodes 5.41
191 TestMultiNode/serial/PingHostFrom2Pods 0.85
192 TestMultiNode/serial/AddNode 41.18
193 TestMultiNode/serial/ProfileList 0.21
194 TestMultiNode/serial/CopyFile 5.47
195 TestMultiNode/serial/StopNode 2.69
196 TestMultiNode/serial/StartAfterStop 31.09
197 TestMultiNode/serial/RestartKeepsNodes 862.79
198 TestMultiNode/serial/DeleteNode 4.99
199 TestMultiNode/serial/StopMultiNode 4.44
200 TestMultiNode/serial/RestartMultiNode 578.57
201 TestMultiNode/serial/ValidateNameConflict 43.95
205 TestPreload 136.4
207 TestScheduledStopUnix 108.93
208 TestSkaffold 74.14
211 TestRunningBinaryUpgrade 162.8
213 TestKubernetesUpgrade 139.11
226 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.85
227 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.95
228 TestNetworkPlugins/group/auto/Start 405.6
229 TestNetworkPlugins/group/auto/KubeletFlags 0.15
230 TestNetworkPlugins/group/auto/NetCatPod 13.23
231 TestNetworkPlugins/group/auto/DNS 0.12
232 TestNetworkPlugins/group/auto/Localhost 0.11
233 TestNetworkPlugins/group/auto/HairPin 5.1
234 TestStoppedBinaryUpgrade/Setup 1.78
235 TestStoppedBinaryUpgrade/Upgrade 170.06
236 TestStoppedBinaryUpgrade/MinikubeLogs 2.62
245 TestPause/serial/Start 62.33
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.49
249 TestNoKubernetes/serial/StartWithK8s 42.38
250 TestNetworkPlugins/group/cilium/Start 105.29
251 TestNoKubernetes/serial/StartWithStopK8s 16.48
252 TestNoKubernetes/serial/Start 14.93
253 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
254 TestNoKubernetes/serial/ProfileList 0.63
255 TestNoKubernetes/serial/Stop 2.25
256 TestNoKubernetes/serial/StartNoArgs 15.13
257 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
258 TestNetworkPlugins/group/calico/Start 308.14
259 TestNetworkPlugins/group/cilium/ControllerPod 5.02
260 TestNetworkPlugins/group/cilium/KubeletFlags 0.15
261 TestNetworkPlugins/group/cilium/NetCatPod 13.75
262 TestNetworkPlugins/group/cilium/DNS 0.15
263 TestNetworkPlugins/group/cilium/Localhost 0.12
264 TestNetworkPlugins/group/cilium/HairPin 0.1
265 TestNetworkPlugins/group/custom-flannel/Start 57.88
266 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
267 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.18
268 TestNetworkPlugins/group/custom-flannel/DNS 0.12
269 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
270 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
271 TestNetworkPlugins/group/false/Start 101.35
272 TestNetworkPlugins/group/false/KubeletFlags 0.15
273 TestNetworkPlugins/group/false/NetCatPod 12.19
274 TestNetworkPlugins/group/false/DNS 0.11
275 TestNetworkPlugins/group/false/Localhost 0.1
276 TestNetworkPlugins/group/false/HairPin 5.11
277 TestNetworkPlugins/group/kindnet/Start 70.54
278 TestNetworkPlugins/group/calico/ControllerPod 5.01
279 TestNetworkPlugins/group/calico/KubeletFlags 0.15
280 TestNetworkPlugins/group/calico/NetCatPod 13.29
281 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
282 TestNetworkPlugins/group/calico/DNS 0.14
283 TestNetworkPlugins/group/calico/Localhost 0.11
284 TestNetworkPlugins/group/calico/HairPin 0.11
285 TestNetworkPlugins/group/kindnet/KubeletFlags 0.15
286 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
287 TestNetworkPlugins/group/flannel/Start 54.36
288 TestNetworkPlugins/group/kindnet/DNS 0.12
289 TestNetworkPlugins/group/kindnet/Localhost 0.12
290 TestNetworkPlugins/group/kindnet/HairPin 0.11
291 TestNetworkPlugins/group/enable-default-cni/Start 55.94
292 TestNetworkPlugins/group/flannel/ControllerPod 8.01
293 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
294 TestNetworkPlugins/group/flannel/NetCatPod 12.23
295 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.15
296 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.2
297 TestNetworkPlugins/group/flannel/DNS 0.13
298 TestNetworkPlugins/group/flannel/Localhost 0.12
299 TestNetworkPlugins/group/flannel/HairPin 0.11
300 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
301 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
302 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
303 TestNetworkPlugins/group/bridge/Start 53.81
304 TestNetworkPlugins/group/kubenet/Start 64.68
305 TestNetworkPlugins/group/bridge/KubeletFlags 0.15
306 TestNetworkPlugins/group/bridge/NetCatPod 13.19
307 TestNetworkPlugins/group/bridge/DNS 0.12
308 TestNetworkPlugins/group/bridge/Localhost 0.11
309 TestNetworkPlugins/group/bridge/HairPin 0.11
310 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
311 TestNetworkPlugins/group/kubenet/NetCatPod 13.19
312 TestNetworkPlugins/group/kubenet/DNS 0.14
313 TestNetworkPlugins/group/kubenet/Localhost 0.13
316 TestStartStop/group/old-k8s-version/serial/FirstStart 144.65
317 TestStartStop/group/old-k8s-version/serial/DeployApp 10.31
319 TestStartStop/group/no-preload/serial/FirstStart 66.56
320 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.72
321 TestStartStop/group/old-k8s-version/serial/Stop 1.25
322 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.31
323 TestStartStop/group/old-k8s-version/serial/SecondStart 460.51
324 TestStartStop/group/no-preload/serial/DeployApp 10.27
325 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.62
326 TestStartStop/group/no-preload/serial/Stop 8.25
327 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.31
328 TestStartStop/group/no-preload/serial/SecondStart 316.29
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
330 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
331 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
332 TestStartStop/group/no-preload/serial/Pause 1.94
334 TestStartStop/group/embed-certs/serial/FirstStart 59.63
335 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
336 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
337 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
338 TestStartStop/group/old-k8s-version/serial/Pause 2.13
339 TestStartStop/group/embed-certs/serial/DeployApp 10.27
341 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.21
342 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.74
343 TestStartStop/group/embed-certs/serial/Stop 3.28
344 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
345 TestStartStop/group/embed-certs/serial/SecondStart 315.58
346 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.66
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.24
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 310.54
351 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
352 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
353 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.17
354 TestStartStop/group/embed-certs/serial/Pause 1.9
356 TestStartStop/group/newest-cni/serial/FirstStart 52.84
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.17
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.96
361 TestStartStop/group/newest-cni/serial/DeployApp 0
362 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
363 TestStartStop/group/newest-cni/serial/Stop 3.25
364 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.3
365 TestStartStop/group/newest-cni/serial/SecondStart 29.49
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.18
369 TestStartStop/group/newest-cni/serial/Pause 1.9
TestDownloadOnly/v1.16.0/json-events (18.49s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-020518 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-020518 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (18.486512611s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (18.49s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-020518
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-020518: exit status 85 (296.496107ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-020518 | jenkins | v1.28.0 | 14 Jan 23 02:05 PST |          |
	|         | -p download-only-020518        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 02:05:18
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 02:05:18.960806    2919 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:05:18.960986    2919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:05:18.960992    2919 out.go:309] Setting ErrFile to fd 2...
	I0114 02:05:18.960996    2919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:05:18.961122    2919 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
	W0114 02:05:18.961245    2919 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15642-1627/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15642-1627/.minikube/config/config.json: no such file or directory
	I0114 02:05:18.962010    2919 out.go:303] Setting JSON to true
	I0114 02:05:18.980464    2919 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":291,"bootTime":1673690427,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0114 02:05:18.980552    2919 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:05:19.003568    2919 out.go:97] [download-only-020518] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:05:19.003801    2919 notify.go:220] Checking for updates...
	W0114 02:05:19.003826    2919 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball: no such file or directory
	I0114 02:05:19.024117    2919 out.go:169] MINIKUBE_LOCATION=15642
	I0114 02:05:19.045580    2919 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	I0114 02:05:19.067517    2919 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:05:19.089356    2919 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:05:19.110523    2919 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	W0114 02:05:19.153960    2919 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 02:05:19.154290    2919 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:05:19.319237    2919 out.go:97] Using the hyperkit driver based on user configuration
	I0114 02:05:19.319315    2919 start.go:294] selected driver: hyperkit
	I0114 02:05:19.319337    2919 start.go:838] validating driver "hyperkit" against <nil>
	I0114 02:05:19.319483    2919 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:05:19.319881    2919 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15642-1627/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0114 02:05:19.457179    2919 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0114 02:05:19.461829    2919 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:05:19.461845    2919 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0114 02:05:19.461883    2919 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 02:05:19.465757    2919 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0114 02:05:19.465863    2919 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0114 02:05:19.465893    2919 cni.go:95] Creating CNI manager for ""
	I0114 02:05:19.465901    2919 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:05:19.465913    2919 start_flags.go:319] config:
	{Name:download-only-020518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-020518 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:05:19.466128    2919 iso.go:125] acquiring lock: {Name:mkf812bef4e208b28a360507a7c86d17e208f6c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:05:19.487616    2919 out.go:97] Downloading VM boot image ...
	I0114 02:05:19.487787    2919 download.go:101] Downloading: https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/iso/amd64/minikube-v1.28.0-1668700269-15235-amd64.iso
	I0114 02:05:26.480630    2919 out.go:97] Starting control plane node download-only-020518 in cluster download-only-020518
	I0114 02:05:26.480731    2919 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 02:05:26.596072    2919 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0114 02:05:26.596104    2919 cache.go:57] Caching tarball of preloaded images
	I0114 02:05:26.596458    2919 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 02:05:26.618035    2919 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0114 02:05:26.618130    2919 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:05:26.865641    2919 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-020518"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)

TestDownloadOnly/v1.25.3/json-events (11.88s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-020518 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-020518 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=hyperkit : (11.876158s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (11.88s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

                                                

=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

                                                

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-020518
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-020518: exit status 85 (301.045608ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-020518 | jenkins | v1.28.0 | 14 Jan 23 02:05 PST |          |
	|         | -p download-only-020518        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-020518 | jenkins | v1.28.0 | 14 Jan 23 02:05 PST |          |
	|         | -p download-only-020518        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 02:05:37
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 02:05:37.746226    2946 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:05:37.746397    2946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:05:37.746403    2946 out.go:309] Setting ErrFile to fd 2...
	I0114 02:05:37.746410    2946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:05:37.746526    2946 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
	W0114 02:05:37.746629    2946 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15642-1627/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15642-1627/.minikube/config/config.json: no such file or directory
	I0114 02:05:37.746987    2946 out.go:303] Setting JSON to true
	I0114 02:05:37.766173    2946 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":310,"bootTime":1673690427,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0114 02:05:37.766294    2946 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:05:37.788533    2946 out.go:97] [download-only-020518] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:05:37.788748    2946 notify.go:220] Checking for updates...
	I0114 02:05:37.810444    2946 out.go:169] MINIKUBE_LOCATION=15642
	I0114 02:05:37.832298    2946 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	I0114 02:05:37.854602    2946 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:05:37.876625    2946 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:05:37.898394    2946 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	W0114 02:05:37.942151    2946 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 02:05:37.942687    2946 config.go:180] Loaded profile config "download-only-020518": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0114 02:05:37.942750    2946 start.go:746] api.Load failed for download-only-020518: filestore "download-only-020518": Docker machine "download-only-020518" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 02:05:37.942808    2946 driver.go:365] Setting default libvirt URI to qemu:///system
	W0114 02:05:37.942835    2946 start.go:746] api.Load failed for download-only-020518: filestore "download-only-020518": Docker machine "download-only-020518" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 02:05:37.971141    2946 out.go:97] Using the hyperkit driver based on existing profile
	I0114 02:05:37.971220    2946 start.go:294] selected driver: hyperkit
	I0114 02:05:37.971230    2946 start.go:838] validating driver "hyperkit" against &{Name:download-only-020518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-020518 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:05:37.971493    2946 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:05:37.971700    2946 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15642-1627/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0114 02:05:37.979713    2946 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0114 02:05:37.983005    2946 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:05:37.983021    2946 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0114 02:05:37.985145    2946 cni.go:95] Creating CNI manager for ""
	I0114 02:05:37.985160    2946 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:05:37.985178    2946 start_flags.go:319] config:
	{Name:download-only-020518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-020518 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:05:37.985295    2946 iso.go:125] acquiring lock: {Name:mkf812bef4e208b28a360507a7c86d17e208f6c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:05:38.006268    2946 out.go:97] Starting control plane node download-only-020518 in cluster download-only-020518
	I0114 02:05:38.006305    2946 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:05:38.111572    2946 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 02:05:38.111607    2946 cache.go:57] Caching tarball of preloaded images
	I0114 02:05:38.111954    2946 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:05:38.133690    2946 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I0114 02:05:38.133770    2946 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:05:38.380004    2946 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 02:05:46.625406    2946 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:05:46.625608    2946 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:05:47.211944    2946 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 02:05:47.212024    2946 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/download-only-020518/config.json ...
	I0114 02:05:47.212478    2946 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:05:47.212750    2946 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/darwin/amd64/v1.25.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-020518"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.30s)

TestDownloadOnly/DeleteAll (0.42s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.42s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-020518
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

TestBinaryMirror (1s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-020551 --alsologtostderr --binary-mirror http://127.0.0.1:49392 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-020551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-020551
--- PASS: TestBinaryMirror (1.00s)

TestOffline (428.61s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-025507 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-025507 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (7m5.263753529s)
helpers_test.go:175: Cleaning up "offline-docker-025507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-025507
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-025507: (3.347067778s)
--- PASS: TestOffline (428.61s)

TestAddons/Setup (138.82s)

=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-020552 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p addons-020552 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m18.820016306s)
--- PASS: TestAddons/Setup (138.82s)

                                                

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:287: registry stabilized in 7.986257ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-5l7g6" [b92eccbd-0ae3-4688-bf30-edb7d4ab9684] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010328512s
addons_test.go:292: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-bwg7v" [7bfa8091-7e86-4333-9560-f1fb0ebd32ff] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005923648s
addons_test.go:297: (dbg) Run:  kubectl --context addons-020552 delete po -l run=registry-test --now
addons_test.go:302: (dbg) Run:  kubectl --context addons-020552 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) Done: kubectl --context addons-020552 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.92569268s)
addons_test.go:316: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020552 ip
2023/01/14 02:08:27 [DEBUG] GET http://192.168.64.2:5000
addons_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020552 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.53s)

TestAddons/parallel/Ingress (19.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:169: (dbg) Run:  kubectl --context addons-020552 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:189: (dbg) Run:  kubectl --context addons-020552 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context addons-020552 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [c8e87ec8-ca8d-4542-ab71-fc7a41e50610] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [c8e87ec8-ca8d-4542-ab71-fc7a41e50610] Running
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.008721628s
addons_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020552 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context addons-020552 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020552 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.64.2
addons_test.go:263: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020552 addons disable ingress-dns --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Ingress
addons_test.go:268: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020552 addons disable ingress --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Ingress
addons_test.go:268: (dbg) Done: out/minikube-darwin-amd64 -p addons-020552 addons disable ingress --alsologtostderr -v=1: (7.404179669s)
--- PASS: TestAddons/parallel/Ingress (19.75s)

TestAddons/parallel/MetricsServer (5.48s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 1.778191ms
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-56c6cfbdd9-2pjnt" [f08a4810-93b3-472e-ad1c-4a93cde46c82] Running
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.053758714s
addons_test.go:372: (dbg) Run:  kubectl --context addons-020552 top pods -n kube-system
addons_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020552 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.48s)

TestAddons/parallel/HelmTiller (12.57s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:413: tiller-deploy stabilized in 2.006835ms
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-z924q" [afc5eeff-d7a6-4c14-a6ae-8736d54a4e00] Running
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007165677s
addons_test.go:430: (dbg) Run:  kubectl --context addons-020552 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:430: (dbg) Done: kubectl --context addons-020552 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.230974004s)
addons_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020552 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.57s)

TestAddons/parallel/CSI (44.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:518: csi-hostpath-driver pods stabilized in 4.086481ms
addons_test.go:521: (dbg) Run:  kubectl --context addons-020552 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:526: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-020552 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:531: (dbg) Run:  kubectl --context addons-020552 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:536: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [8759e170-785c-467a-95de-3c830d56ea6f] Pending
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [8759e170-785c-467a-95de-3c830d56ea6f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [8759e170-785c-467a-95de-3c830d56ea6f] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:536: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.01223276s
addons_test.go:541: (dbg) Run:  kubectl --context addons-020552 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:546: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-020552 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-020552 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:551: (dbg) Run:  kubectl --context addons-020552 delete pod task-pv-pod
addons_test.go:551: (dbg) Done: kubectl --context addons-020552 delete pod task-pv-pod: (1.174735989s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-020552 delete pvc hpvc
addons_test.go:563: (dbg) Run:  kubectl --context addons-020552 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-020552 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-020552 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [52a1c0b8-442d-4d0a-9529-e3932e027e21] Pending
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [52a1c0b8-442d-4d0a-9529-e3932e027e21] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [52a1c0b8-442d-4d0a-9529-e3932e027e21] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.00840992s
addons_test.go:583: (dbg) Run:  kubectl --context addons-020552 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Done: kubectl --context addons-020552 delete pod task-pv-pod-restore: (1.051982308s)
addons_test.go:587: (dbg) Run:  kubectl --context addons-020552 delete pvc hpvc-restore
addons_test.go:591: (dbg) Run:  kubectl --context addons-020552 delete volumesnapshot new-snapshot-demo
addons_test.go:595: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020552 addons disable csi-hostpath-driver --alsologtostderr -v=1
=== CONT  TestAddons/parallel/CSI
addons_test.go:595: (dbg) Done: out/minikube-darwin-amd64 -p addons-020552 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.561678014s)
addons_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020552 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.56s)
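The restore sequence above works because the new claim's `dataSource` points back at the snapshot taken earlier. A minimal sketch of what a manifest like `testdata/csi-hostpath-driver/pvc-restore.yaml` plausibly contains — only the object names `hpvc-restore` and `new-snapshot-demo` come from this log; the storage class and size are assumptions:

```shell
# Hypothetical reconstruction of the restore PVC; hpvc-restore and
# new-snapshot-demo are the names from the log, everything else is assumed.
cat <<'EOF' > /tmp/pvc-restore.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc   # assumed class name for the hostpath driver
  dataSource:
    name: new-snapshot-demo           # the VolumeSnapshot created above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
grep 'dataSource' /tmp/pvc-restore.yaml
```

Applying such a manifest with `kubectl create -f` is exactly the step shown at addons_test.go:563 above; the CSI driver then provisions the new volume from the snapshot's content.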

TestAddons/parallel/Headlamp (10.26s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-020552 --alsologtostderr -v=1
addons_test.go:774: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-020552 --alsologtostderr -v=1: (1.248724203s)
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-764769c887-wrh2z" [0370bf12-934b-40b7-a493-45a815f005c8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-764769c887-wrh2z" [0370bf12-934b-40b7-a493-45a815f005c8] Running
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.00872779s
--- PASS: TestAddons/parallel/Headlamp (10.26s)

TestAddons/parallel/CloudSpanner (5.33s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:342: "cloud-spanner-emulator-7d7766f55c-qkmht" [ed96f9b3-31f0-485a-b042-bfa52a334fbb] Running
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008796625s
addons_test.go:798: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-020552
--- PASS: TestAddons/parallel/CloudSpanner (5.33s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:607: (dbg) Run:  kubectl --context addons-020552 create ns new-namespace
addons_test.go:621: (dbg) Run:  kubectl --context addons-020552 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (3.61s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:139: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-020552
addons_test.go:139: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-020552: (3.237376533s)
addons_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-020552
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-020552
--- PASS: TestAddons/StoppedEnableDisable (3.61s)

TestCertOptions (41.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-031917 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0114 03:19:18.282645    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:18.288026    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:18.299522    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:18.320723    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:18.362487    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:18.443471    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:18.603626    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:18.923809    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:19.564218    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:20.845466    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:22.025342    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:23.405700    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:23.691640    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:19:28.526386    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:34.089363    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 03:19:38.767508    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:19:42.505894    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:51.404637    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-031917 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (37.329012971s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-031917 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E0114 03:19:54.460366    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-031917 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-031917 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-031917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-031917
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-031917: (3.367863145s)
--- PASS: TestCertOptions (41.05s)
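cert_options_test.go:60 above checks, over ssh, that the IPs and names passed via `--apiserver-ips` / `--apiserver-names` ended up in the apiserver certificate's Subject Alternative Names. The same openssl inspection can be reproduced locally against a self-signed stand-in carrying those SANs — a sketch, not the cluster's real certificate; requires openssl 1.1.1+ for `-addext`, and the `/tmp` paths are scratch files:

```shell
# Generate a throwaway cert with the SANs the test requested, then inspect it
# the same way the test inspects /var/lib/minikube/certs/apiserver.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com"
openssl x509 -text -noout -in /tmp/apiserver.crt | grep -A1 "Subject Alternative Name"
```

The grep should surface all four entries; the test additionally verifies the non-default `--apiserver-port=8555` through `kubectl config view` and `admin.conf`.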

TestCertExpiration (255.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-031810 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0114 03:18:10.997806    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 03:18:16.943752    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 03:18:26.028210    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-031810 --memory=2048 --cert-expiration=3m --driver=hyperkit : (42.089833959s)
E0114 03:19:01.539092    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:01.545414    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:01.556507    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:01.576676    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:01.616897    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:01.697510    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:01.858391    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:02.180406    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:02.821930    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:04.102135    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:06.662696    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:19:06.988417    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:19:11.783327    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-031810 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0114 03:21:56.661259    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:21:56.667067    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:21:56.677331    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:21:56.697913    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:21:56.712125    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:21:56.739141    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:21:56.819833    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:21:56.981305    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:21:57.301758    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:21:57.944056    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:21:59.224472    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:22:00.732470    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:22:01.785765    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:22:02.131460    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:22:04.700850    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:22:06.906176    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:22:17.147691    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-031810 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (28.115092375s)
helpers_test.go:175: Cleaning up "cert-expiration-031810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-031810
E0114 03:22:21.213229    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
=== CONT  TestCertExpiration
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-031810: (5.28227834s)
--- PASS: TestCertExpiration (255.49s)
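The two `--cert-expiration` values above are a 3-minute window (short enough to lapse between the two starts) and 8760h, i.e. exactly one year (365 × 24h). Whether a certificate is still inside its validity window can be checked with openssl's `-checkend`; a local sketch against a self-signed stand-in, since `-days` cannot express a 3-minute lifetime:

```shell
# Self-signed stand-in cert valid for one day. `-checkend N` exits 0 when the
# cert is still valid N seconds from now; an expired 3m cert would fail this
# check, which is the condition the second start has to recover from.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/exp.key -out /tmp/exp.crt -subj "/CN=minikube"
openssl x509 -in /tmp/exp.crt -noout -checkend 0 && echo "still valid"
openssl x509 -in /tmp/exp.crt -noout -enddate
```

The second start then succeeds because minikube regenerates the expired certificates before bringing the control plane back up.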

TestDockerFlags (46.41s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-031830 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-031830 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (42.4792336s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-031830 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-031830 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-031830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-031830
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-031830: (3.610030639s)
--- PASS: TestDockerFlags (46.41s)
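The two `systemctl show` probes at docker_test.go:50 and :61 reduce to substring checks: each `--docker-env` pair must appear in the unit's Environment property, each `--docker-opt` in its ExecStart. A minimal stand-in for that matching — the sample property lines below are illustrative, not captured from this run:

```shell
# Hypothetical systemctl output lines; the FOO=BAR / BAZ=BAT values and the
# debug / icc=true options are the flags passed to minikube start above.
environment='Environment=FOO=BAR BAZ=BAT'
execstart='ExecStart=/usr/bin/dockerd --debug --icc=true'
echo "$environment" | grep -q 'FOO=BAR'      && echo "docker-env present"
echo "$execstart"   | grep -q -- '--icc=true' && echo "docker-opt present"
```

The `--` before the grep pattern keeps `--icc=true` from being parsed as a grep option, the same pitfall any script matching daemon flags has to avoid.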

TestForceSystemdFlag (43.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-031657 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-031657 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (38.187038825s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-031657 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-031657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-031657
=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-031657: (5.27395143s)
--- PASS: TestForceSystemdFlag (43.63s)

TestForceSystemdEnv (41.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-031749 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0114 03:17:50.184160    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-031749 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (37.884520877s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-031749 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-031749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-031749
E0114 03:18:28.591378    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-031749: (3.371421287s)
--- PASS: TestForceSystemdEnv (41.45s)

TestHyperKitDriverInstallOrUpdate (8.36s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.36s)

TestErrorSpam/setup (37.8s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-020932 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-020932 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 --driver=hyperkit : (37.803995519s)
--- PASS: TestErrorSpam/setup (37.80s)

TestErrorSpam/start (1.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 start --dry-run
--- PASS: TestErrorSpam/start (1.37s)

TestErrorSpam/status (0.5s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 status
--- PASS: TestErrorSpam/status (0.50s)

TestErrorSpam/pause (1.33s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 pause
--- PASS: TestErrorSpam/pause (1.33s)

TestErrorSpam/unpause (1.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 unpause
--- PASS: TestErrorSpam/unpause (1.34s)

TestErrorSpam/stop (3.68s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 stop: (3.235406358s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-020932 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-020932 stop
--- PASS: TestErrorSpam/stop (3.68s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/test/nested/copy/2917/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (56.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021019 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-021019 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (56.121056476s)
--- PASS: TestFunctional/serial/StartWithProxy (56.12s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.59s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021019 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-021019 --alsologtostderr -v=8: (39.590975689s)
functional_test.go:656: soft start took 39.591554364s for "functional-021019" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.59s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-021019 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (8.77s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 cache add k8s.gcr.io/pause:3.1: (3.238093046s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 cache add k8s.gcr.io/pause:3.3: (2.945194688s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 cache add k8s.gcr.io/pause:latest: (2.582793154s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.77s)

TestFunctional/serial/CacheCmd/cache/add_local (1.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-021019 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1724264153/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 cache add minikube-local-cache-test:functional-021019
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 cache delete minikube-local-cache-test:functional-021019
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-021019
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.52s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021019 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (140.508911ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 cache reload: (1.579300104s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 kubectl -- --context functional-021019 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-021019 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

TestFunctional/serial/ExtraConfig (58.38s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021019 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-021019 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.384393228s)
functional_test.go:754: restart took 58.384519788s for "functional-021019" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (58.38s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-021019 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 logs: (2.635628387s)
--- PASS: TestFunctional/serial/LogsCmd (2.64s)

TestFunctional/serial/LogsFileCmd (2.77s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2354534537/001/logs.txt
E0114 02:13:10.880334    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:13:10.924241    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:13:10.935727    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:13:10.957197    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:13:10.998923    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:13:11.079499    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:13:11.239968    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:13:11.561368    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:13:12.201584    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2354534537/001/logs.txt: (2.771802041s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.77s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021019 config get cpus: exit status 14 (72.461861ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 config unset cpus
E0114 02:13:13.482011    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021019 config get cpus: exit status 14 (57.607366ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (8.4s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-021019 --alsologtostderr -v=1]
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-021019 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 4392: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.40s)

TestFunctional/parallel/DryRun (0.9s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021019 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-021019 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (479.84614ms)

-- stdout --
	* [functional-021019] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	* Using the hyperkit driver based on existing profile

-- /stdout --
** stderr ** 
	I0114 02:14:10.770178    4356 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:14:10.770338    4356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:14:10.770345    4356 out.go:309] Setting ErrFile to fd 2...
	I0114 02:14:10.770349    4356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:14:10.770464    4356 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
	I0114 02:14:10.771016    4356 out.go:303] Setting JSON to false
	I0114 02:14:10.789723    4356 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":823,"bootTime":1673690427,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0114 02:14:10.789834    4356 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:14:10.811692    4356 out.go:177] * [functional-021019] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:14:10.854133    4356 notify.go:220] Checking for updates...
	I0114 02:14:10.876175    4356 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 02:14:10.899228    4356 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	I0114 02:14:10.920131    4356 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:14:10.941309    4356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:14:10.962445    4356 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	I0114 02:14:10.984958    4356 config.go:180] Loaded profile config "functional-021019": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:14:10.985628    4356 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:14:10.985735    4356 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:14:10.993240    4356 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:50422
	I0114 02:14:11.000504    4356 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:14:11.000923    4356 main.go:134] libmachine: Using API Version  1
	I0114 02:14:11.000936    4356 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:14:11.001142    4356 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:14:11.001248    4356 main.go:134] libmachine: (functional-021019) Calling .DriverName
	I0114 02:14:11.001370    4356 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:14:11.001626    4356 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:14:11.001646    4356 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:14:11.008465    4356 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:50424
	I0114 02:14:11.008834    4356 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:14:11.009134    4356 main.go:134] libmachine: Using API Version  1
	I0114 02:14:11.009149    4356 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:14:11.009339    4356 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:14:11.009433    4356 main.go:134] libmachine: (functional-021019) Calling .DriverName
	I0114 02:14:11.037320    4356 out.go:177] * Using the hyperkit driver based on existing profile
	I0114 02:14:11.080102    4356 start.go:294] selected driver: hyperkit
	I0114 02:14:11.080162    4356 start.go:838] validating driver "hyperkit" against &{Name:functional-021019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-021019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:14:11.080423    4356 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 02:14:11.107169    4356 out.go:177] 
	W0114 02:14:11.128279    4356 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0114 02:14:11.149028    4356 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021019 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (0.90s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021019 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-021019 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (481.497761ms)

                                                
                                                
-- stdout --
	* [functional-021019] minikube v1.28.0 sur Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 02:14:11.667641    4372 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:14:11.667816    4372 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:14:11.667823    4372 out.go:309] Setting ErrFile to fd 2...
	I0114 02:14:11.667827    4372 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:14:11.667950    4372 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
	I0114 02:14:11.668444    4372 out.go:303] Setting JSON to false
	I0114 02:14:11.687759    4372 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":824,"bootTime":1673690427,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0114 02:14:11.687845    4372 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:14:11.709107    4372 out.go:177] * [functional-021019] minikube v1.28.0 sur Darwin 13.0.1
	I0114 02:14:11.767395    4372 notify.go:220] Checking for updates...
	I0114 02:14:11.788879    4372 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 02:14:11.810118    4372 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	I0114 02:14:11.831103    4372 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:14:11.852018    4372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:14:11.873193    4372 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	I0114 02:14:11.895799    4372 config.go:180] Loaded profile config "functional-021019": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:14:11.896528    4372 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:14:11.896583    4372 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:14:11.909188    4372 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:50436
	I0114 02:14:11.909575    4372 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:14:11.909986    4372 main.go:134] libmachine: Using API Version  1
	I0114 02:14:11.909997    4372 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:14:11.910196    4372 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:14:11.910302    4372 main.go:134] libmachine: (functional-021019) Calling .DriverName
	I0114 02:14:11.910423    4372 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:14:11.910684    4372 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:14:11.910705    4372 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:14:11.917437    4372 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:50438
	I0114 02:14:11.917790    4372 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:14:11.918106    4372 main.go:134] libmachine: Using API Version  1
	I0114 02:14:11.918120    4372 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:14:11.918362    4372 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:14:11.918476    4372 main.go:134] libmachine: (functional-021019) Calling .DriverName
	I0114 02:14:11.945936    4372 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0114 02:14:11.987847    4372 start.go:294] selected driver: hyperkit
	I0114 02:14:11.987862    4372 start.go:838] validating driver "hyperkit" against &{Name:functional-021019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-021019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:14:11.988010    4372 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 02:14:12.011748    4372 out.go:177] 
	W0114 02:14:12.033204    4372 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0114 02:14:12.055065    4372 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.48s)

TestFunctional/parallel/StatusCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.48s)

TestFunctional/parallel/ServiceCmd (13.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-021019 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-021019 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-n8kp5" [0c37789a-21c8-4419-a14a-1a0119869a2e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-n8kp5" [0c37789a-21c8-4419-a14a-1a0119869a2e] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.009837896s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 service list
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 service --namespace=default --https --url hello-node
functional_test.go:1476: found endpoint: https://192.168.64.4:30211
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 service hello-node --url
functional_test.go:1511: found endpoint for hello-node: http://192.168.64.4:30211
--- PASS: TestFunctional/parallel/ServiceCmd (13.27s)

TestFunctional/parallel/ServiceCmdConnect (8.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-021019 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-021019 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-jbvb9" [e90570b7-a149-476e-8fed-78485c35602d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0114 02:13:51.886671    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
helpers_test.go:342: "hello-node-connect-6458c8fb6f-jbvb9" [e90570b7-a149-476e-8fed-78485c35602d] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.007820155s
functional_test.go:1579: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 service hello-node-connect --url
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.64.4:32493
functional_test.go:1605: http://192.168.64.4:32493: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6458c8fb6f-jbvb9

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.64.4:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.64.4:32493
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.57s)

TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (24s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [a41a1488-d580-432a-a34c-d30bd47844eb] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005534028s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-021019 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-021019 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-021019 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-021019 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [c7393f56-53ba-4b05-8406-16b646ea4710] Pending
helpers_test.go:342: "sp-pod" [c7393f56-53ba-4b05-8406-16b646ea4710] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [c7393f56-53ba-4b05-8406-16b646ea4710] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.013375928s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-021019 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-021019 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-021019 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [cd1ad85b-07b2-491c-b1a2-385c7f4a4bd0] Pending
helpers_test.go:342: "sp-pod" [cd1ad85b-07b2-491c-b1a2-385c7f4a4bd0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [cd1ad85b-07b2-491c-b1a2-385c7f4a4bd0] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007709413s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-021019 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.00s)

TestFunctional/parallel/SSHCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)

TestFunctional/parallel/CpCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh -n functional-021019 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 cp functional-021019:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd2317164451/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh -n functional-021019 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.66s)

TestFunctional/parallel/MySQL (22.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-021019 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-xkr2s" [fb95c7e1-dbc9-4010-a1b4-70499f8899ed] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0114 02:13:21.164899    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-xkr2s" [fb95c7e1-dbc9-4010-a1b4-70499f8899ed] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.020990383s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-021019 exec mysql-596b7fcdbf-xkr2s -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-021019 exec mysql-596b7fcdbf-xkr2s -- mysql -ppassword -e "show databases;": exit status 1 (139.252391ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-021019 exec mysql-596b7fcdbf-xkr2s -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-021019 exec mysql-596b7fcdbf-xkr2s -- mysql -ppassword -e "show databases;": exit status 1 (116.848111ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-021019 exec mysql-596b7fcdbf-xkr2s -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.46s)

TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/2917/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "sudo cat /etc/test/nested/copy/2917/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/2917.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "sudo cat /etc/ssl/certs/2917.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/2917.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "sudo cat /usr/share/ca-certificates/2917.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/29172.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "sudo cat /etc/ssl/certs/29172.pem"
E0114 02:13:16.043221    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/29172.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "sudo cat /usr/share/ca-certificates/29172.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.11s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-021019 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.13s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021019 ssh "sudo systemctl is-active crio": exit status 1 (129.01894ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.13s)

TestFunctional/parallel/License (0.86s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.86s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.61s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-021019 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-021019
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-021019
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-021019 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-021019 | fc1cad7bf8bb3 | 30B    |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-021019 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7               | d410f4167eea9 | 495MB  |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-021019 | 442a7276c1ac6 | 1.24MB |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
|---------------------------------------------|-------------------|---------------|--------|
2023/01/14 02:14:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-021019 image ls --format json:
[{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"d410f4167eea912908b2f9bcc24eff870cb3c131dfb755088b79a4188bfeb40f","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"495000000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"442a7276c1ac69040e5673fdc909d12c9413157f07e7fc86a544a387637809a0","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-021019"],"size":"1240000"},{"id":"fc1cad7bf8bb303a72ace3417685c2fd73581ce612e374d19e735acf50c82b05","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-021019"],"size":"30"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-021019"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-021019 image ls --format yaml:
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-021019
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: fc1cad7bf8bb303a72ace3417685c2fd73581ce612e374d19e735acf50c82b05
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-021019
size: "30"
- id: d410f4167eea912908b2f9bcc24eff870cb3c131dfb755088b79a4188bfeb40f
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "495000000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021019 ssh pgrep buildkitd: exit status 1 (129.139166ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image build -t localhost/my-image:functional-021019 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 image build -t localhost/my-image:functional-021019 testdata/build: (3.724752673s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-021019 image build -t localhost/my-image:functional-021019 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 3107d7f60b22
Removing intermediate container 3107d7f60b22
---> 25a9e8536da4
Step 3/3 : ADD content.txt /
---> 442a7276c1ac
Successfully built 442a7276c1ac
Successfully tagged localhost/my-image:functional-021019
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)

TestFunctional/parallel/ImageCommands/Setup (3.56s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.493379777s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-021019
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.56s)

TestFunctional/parallel/DockerEnv/bash (0.85s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-021019 docker-env) && out/minikube-darwin-amd64 status -p functional-021019"
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-021019 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.85s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image load --daemon gcr.io/google-containers/addon-resizer:functional-021019

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 image load --daemon gcr.io/google-containers/addon-resizer:functional-021019: (4.058442748s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.24s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image load --daemon gcr.io/google-containers/addon-resizer:functional-021019
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 image load --daemon gcr.io/google-containers/addon-resizer:functional-021019: (2.045755318s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.23s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.104942355s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-021019
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image load --daemon gcr.io/google-containers/addon-resizer:functional-021019

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 image load --daemon gcr.io/google-containers/addon-resizer:functional-021019: (3.181049666s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.57s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image save gcr.io/google-containers/addon-resizer:functional-021019 /Users/jenkins/workspace/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.95s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image rm gcr.io/google-containers/addon-resizer:functional-021019
E0114 02:13:31.405440    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.10s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-021019
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 image save --daemon gcr.io/google-containers/addon-resizer:functional-021019
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-021019 image save --daemon gcr.io/google-containers/addon-resizer:functional-021019: (2.240561368s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-021019
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.36s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-021019 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-021019 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [36d7db60-1899-4e52-81eb-3fe7f4d2d190] Pending
helpers_test.go:342: "nginx-svc" [36d7db60-1899-4e52-81eb-3fe7f4d2d190] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [36d7db60-1899-4e52-81eb-3fe7f4d2d190] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.008673364s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-021019 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.103.110.236 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:254: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:262: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:286: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:294: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:359: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-021019 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "210.492332ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "81.126831ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "209.459092ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "81.507134ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/MountCmd/any-port (8.25s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-021019 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1261679749/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1673691241100818000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1261679749/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1673691241100818000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1261679749/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1673691241100818000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1261679749/001/test-1673691241100818000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021019 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.319969ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 14 10:14 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 14 10:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 14 10:14 test-1673691241100818000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh cat /mount-9p/test-1673691241100818000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-021019 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [10d70cad-10a6-4618-8e6b-669e40315ee5] Pending
helpers_test.go:342: "busybox-mount" [10d70cad-10a6-4618-8e6b-669e40315ee5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [10d70cad-10a6-4618-8e6b-669e40315ee5] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [10d70cad-10a6-4618-8e6b-669e40315ee5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.009994816s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-021019 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-021019 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1261679749/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.25s)

TestFunctional/parallel/MountCmd/specific-port (1.37s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-021019 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2583823402/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021019 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.458323ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-021019 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2583823402/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021019 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021019 ssh "sudo umount -f /mount-9p": exit status 1 (128.579535ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-021019 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-021019 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2583823402/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.37s)

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-021019
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-021019
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-021019
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestIngressAddonLegacy/StartLegacyK8sCluster (120.09s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-021426 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E0114 02:14:32.846635    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:15:54.784707    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-021426 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (2m0.087218239s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (120.09s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.74s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-021426 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-021426 addons enable ingress --alsologtostderr -v=5: (16.739409074s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.74s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.49s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-021426 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.49s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (46.86s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:169: (dbg) Run:  kubectl --context ingress-addon-legacy-021426 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:169: (dbg) Done: kubectl --context ingress-addon-legacy-021426 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.944464794s)
addons_test.go:189: (dbg) Run:  kubectl --context ingress-addon-legacy-021426 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context ingress-addon-legacy-021426 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [e14463a3-f261-49b4-a71c-8b99466d0e63] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [e14463a3-f261-49b4-a71c-8b99466d0e63] Running
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.00815056s
addons_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-021426 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-021426 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-021426 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.64.5
addons_test.go:263: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-021426 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-021426 addons disable ingress-dns --alsologtostderr -v=1: (12.781492739s)
addons_test.go:268: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-021426 addons disable ingress --alsologtostderr -v=1
addons_test.go:268: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-021426 addons disable ingress --alsologtostderr -v=1: (7.216143988s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (46.86s)

TestJSONOutput/start/Command (53.64s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-021735 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0114 02:18:10.914353    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:18:16.862550    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:16.867642    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:16.877742    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:16.898353    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:16.940340    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:17.021375    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:17.183487    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:17.505700    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:18.146601    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:19.428166    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:21.989865    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:18:27.111151    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-021735 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (53.635482828s)
--- PASS: TestJSONOutput/start/Command (53.64s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.5s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-021735 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.48s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-021735 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.48s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-021735 --output=json --user=testUser
E0114 02:18:37.351511    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-021735 --output=json --user=testUser: (8.163196199s)
--- PASS: TestJSONOutput/stop/Command (8.16s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.74s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-021838 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-021838 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (335.346775ms)

-- stdout --
	{"specversion":"1.0","id":"8ca74cbe-d739-48b8-80f7-8b962f2e2f72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-021838] minikube v1.28.0 on Darwin 13.0.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"90d02831-83ac-4298-a8b4-c2ae477b1e3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}
	{"specversion":"1.0","id":"d46898c1-fec7-4ec2-b9e7-d0aef1903848","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig"}}
	{"specversion":"1.0","id":"ec4dee09-b6d6-49b8-9302-8667abb8191c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"fb6c3c70-6992-47c9-b6df-65ec88c07ab1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bbfbeb9d-479c-41f2-8709-d03091cab30e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube"}}
	{"specversion":"1.0","id":"d0c17021-594a-4c41-bec1-6f791fa70f89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-021838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-021838
--- PASS: TestErrorJSONOutput (0.74s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (90.97s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-021839 --driver=hyperkit 
E0114 02:18:57.831630    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-021839 --driver=hyperkit : (38.215993235s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-021839 --driver=hyperkit 
E0114 02:19:38.793106    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-021839 --driver=hyperkit : (43.064846585s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-021839
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-021839
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-021839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-021839
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-021839: (3.44261629s)
helpers_test.go:175: Cleaning up "first-021839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-021839
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-021839: (5.287377614s)
--- PASS: TestMinikubeProfile (90.97s)

TestMountStart/serial/StartWithMountFirst (15.15s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-022010 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-022010 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (14.152591111s)
--- PASS: TestMountStart/serial/StartWithMountFirst (15.15s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-022010 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-022010 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (14.47s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-022010 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-022010 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (13.472972864s)
--- PASS: TestMountStart/serial/StartWithMountSecond (14.47s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-022010 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-022010 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (2.37s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-022010 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-022010 --alsologtostderr -v=5: (2.373369098s)
--- PASS: TestMountStart/serial/DeleteFirst (2.37s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-022010 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-022010 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (2.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-022010
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-022010: (2.236618764s)
--- PASS: TestMountStart/serial/Stop (2.24s)

TestMountStart/serial/RestartStopped (16.44s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-022010
E0114 02:21:00.712724    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-022010: (15.436891615s)
--- PASS: TestMountStart/serial/RestartStopped (16.44s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-022010 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-022010 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (136.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-022105 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0114 02:21:43.428564    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:43.434665    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:43.444875    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:43.465963    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:43.508032    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:43.590214    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:43.751699    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:44.072454    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:44.714689    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:45.996285    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:48.558471    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:21:53.679707    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:22:03.920971    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:22:24.400906    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:23:05.362135    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:23:10.911452    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:23:16.858189    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-022105 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (2m16.698968807s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.94s)

TestMultiNode/serial/DeployApp2Nodes (5.41s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-022105 -- rollout status deployment/busybox: (3.75007256s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- exec busybox-65db55d5d6-blh4m -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- exec busybox-65db55d5d6-vpc6w -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- exec busybox-65db55d5d6-blh4m -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- exec busybox-65db55d5d6-vpc6w -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- exec busybox-65db55d5d6-blh4m -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- exec busybox-65db55d5d6-vpc6w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.41s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- exec busybox-65db55d5d6-blh4m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- exec busybox-65db55d5d6-blh4m -- sh -c "ping -c 1 192.168.64.1"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- exec busybox-65db55d5d6-vpc6w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022105 -- exec busybox-65db55d5d6-vpc6w -- sh -c "ping -c 1 192.168.64.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (41.18s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-022105 -v 3 --alsologtostderr
E0114 02:23:44.550571    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-022105 -v 3 --alsologtostderr: (40.869429307s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.18s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (5.47s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp testdata/cp-test.txt multinode-022105:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp multinode-022105:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile753708160/001/cp-test_multinode-022105.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp multinode-022105:/home/docker/cp-test.txt multinode-022105-m02:/home/docker/cp-test_multinode-022105_multinode-022105-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m02 "sudo cat /home/docker/cp-test_multinode-022105_multinode-022105-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp multinode-022105:/home/docker/cp-test.txt multinode-022105-m03:/home/docker/cp-test_multinode-022105_multinode-022105-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m03 "sudo cat /home/docker/cp-test_multinode-022105_multinode-022105-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp testdata/cp-test.txt multinode-022105-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp multinode-022105-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile753708160/001/cp-test_multinode-022105-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp multinode-022105-m02:/home/docker/cp-test.txt multinode-022105:/home/docker/cp-test_multinode-022105-m02_multinode-022105.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105 "sudo cat /home/docker/cp-test_multinode-022105-m02_multinode-022105.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp multinode-022105-m02:/home/docker/cp-test.txt multinode-022105-m03:/home/docker/cp-test_multinode-022105-m02_multinode-022105-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m03 "sudo cat /home/docker/cp-test_multinode-022105-m02_multinode-022105-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp testdata/cp-test.txt multinode-022105-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp multinode-022105-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile753708160/001/cp-test_multinode-022105-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp multinode-022105-m03:/home/docker/cp-test.txt multinode-022105:/home/docker/cp-test_multinode-022105-m03_multinode-022105.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105 "sudo cat /home/docker/cp-test_multinode-022105-m03_multinode-022105.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 cp multinode-022105-m03:/home/docker/cp-test.txt multinode-022105-m02:/home/docker/cp-test_multinode-022105-m03_multinode-022105-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 ssh -n multinode-022105-m02 "sudo cat /home/docker/cp-test_multinode-022105-m03_multinode-022105-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.47s)

TestMultiNode/serial/StopNode (2.69s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-022105 node stop m03: (2.196140411s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-022105 status: exit status 7 (246.665912ms)

-- stdout --
	multinode-022105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-022105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-022105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-022105 status --alsologtostderr: exit status 7 (244.234205ms)

-- stdout --
	multinode-022105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-022105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-022105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0114 02:24:17.730582    5553 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:24:17.730753    5553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:24:17.730759    5553 out.go:309] Setting ErrFile to fd 2...
	I0114 02:24:17.730763    5553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:24:17.730880    5553 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
	I0114 02:24:17.731087    5553 out.go:303] Setting JSON to false
	I0114 02:24:17.731111    5553 mustload.go:65] Loading cluster: multinode-022105
	I0114 02:24:17.731151    5553 notify.go:220] Checking for updates...
	I0114 02:24:17.731432    5553 config.go:180] Loaded profile config "multinode-022105": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:24:17.731441    5553 status.go:255] checking status of multinode-022105 ...
	I0114 02:24:17.731804    5553 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:24:17.731854    5553 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:24:17.738466    5553 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51416
	I0114 02:24:17.738830    5553 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:24:17.739194    5553 main.go:134] libmachine: Using API Version  1
	I0114 02:24:17.739206    5553 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:24:17.739395    5553 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:24:17.739495    5553 main.go:134] libmachine: (multinode-022105) Calling .GetState
	I0114 02:24:17.739574    5553 main.go:134] libmachine: (multinode-022105) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 02:24:17.739636    5553 main.go:134] libmachine: (multinode-022105) DBG | hyperkit pid from json: 5128
	I0114 02:24:17.740724    5553 status.go:330] multinode-022105 host status = "Running" (err=<nil>)
	I0114 02:24:17.740748    5553 host.go:66] Checking if "multinode-022105" exists ...
	I0114 02:24:17.740992    5553 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:24:17.741013    5553 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:24:17.747658    5553 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51418
	I0114 02:24:17.748028    5553 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:24:17.748374    5553 main.go:134] libmachine: Using API Version  1
	I0114 02:24:17.748388    5553 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:24:17.748601    5553 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:24:17.748700    5553 main.go:134] libmachine: (multinode-022105) Calling .GetIP
	I0114 02:24:17.748783    5553 host.go:66] Checking if "multinode-022105" exists ...
	I0114 02:24:17.749044    5553 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:24:17.749067    5553 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:24:17.755928    5553 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51420
	I0114 02:24:17.756306    5553 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:24:17.756608    5553 main.go:134] libmachine: Using API Version  1
	I0114 02:24:17.756620    5553 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:24:17.756832    5553 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:24:17.756932    5553 main.go:134] libmachine: (multinode-022105) Calling .DriverName
	I0114 02:24:17.757080    5553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:24:17.757102    5553 main.go:134] libmachine: (multinode-022105) Calling .GetSSHHostname
	I0114 02:24:17.757170    5553 main.go:134] libmachine: (multinode-022105) Calling .GetSSHPort
	I0114 02:24:17.757245    5553 main.go:134] libmachine: (multinode-022105) Calling .GetSSHKeyPath
	I0114 02:24:17.757309    5553 main.go:134] libmachine: (multinode-022105) Calling .GetSSHUsername
	I0114 02:24:17.757391    5553 sshutil.go:53] new ssh client: &{IP:192.168.64.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/multinode-022105/id_rsa Username:docker}
	I0114 02:24:17.796022    5553 ssh_runner.go:195] Run: systemctl --version
	I0114 02:24:17.799425    5553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 02:24:17.808356    5553 kubeconfig.go:92] found "multinode-022105" server: "https://192.168.64.11:8443"
	I0114 02:24:17.808377    5553 api_server.go:165] Checking apiserver status ...
	I0114 02:24:17.808418    5553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:24:17.816578    5553 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1759/cgroup
	I0114 02:24:17.823537    5553 api_server.go:181] apiserver freezer: "6:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc354cbf8a762e741bc743d3ed0c30f0.slice/docker-4e292ce551e89421075d0001fe376d2ed514d3f76d64710253204ff1634ab400.scope"
	I0114 02:24:17.823599    5553 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc354cbf8a762e741bc743d3ed0c30f0.slice/docker-4e292ce551e89421075d0001fe376d2ed514d3f76d64710253204ff1634ab400.scope/freezer.state
	I0114 02:24:17.829976    5553 api_server.go:203] freezer state: "THAWED"
	I0114 02:24:17.829989    5553 api_server.go:252] Checking apiserver healthz at https://192.168.64.11:8443/healthz ...
	I0114 02:24:17.834273    5553 api_server.go:278] https://192.168.64.11:8443/healthz returned 200:
	ok
	I0114 02:24:17.834286    5553 status.go:421] multinode-022105 apiserver status = Running (err=<nil>)
	I0114 02:24:17.834294    5553 status.go:257] multinode-022105 status: &{Name:multinode-022105 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 02:24:17.834305    5553 status.go:255] checking status of multinode-022105-m02 ...
	I0114 02:24:17.834756    5553 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:24:17.834778    5553 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:24:17.841811    5553 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51424
	I0114 02:24:17.842163    5553 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:24:17.842495    5553 main.go:134] libmachine: Using API Version  1
	I0114 02:24:17.842508    5553 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:24:17.842719    5553 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:24:17.842828    5553 main.go:134] libmachine: (multinode-022105-m02) Calling .GetState
	I0114 02:24:17.842917    5553 main.go:134] libmachine: (multinode-022105-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 02:24:17.842983    5553 main.go:134] libmachine: (multinode-022105-m02) DBG | hyperkit pid from json: 5195
	I0114 02:24:17.844101    5553 status.go:330] multinode-022105-m02 host status = "Running" (err=<nil>)
	I0114 02:24:17.844109    5553 host.go:66] Checking if "multinode-022105-m02" exists ...
	I0114 02:24:17.844390    5553 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:24:17.844419    5553 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:24:17.851246    5553 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51426
	I0114 02:24:17.851736    5553 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:24:17.852065    5553 main.go:134] libmachine: Using API Version  1
	I0114 02:24:17.852080    5553 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:24:17.852324    5553 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:24:17.852464    5553 main.go:134] libmachine: (multinode-022105-m02) Calling .GetIP
	I0114 02:24:17.852555    5553 host.go:66] Checking if "multinode-022105-m02" exists ...
	I0114 02:24:17.852826    5553 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:24:17.852847    5553 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:24:17.859586    5553 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51428
	I0114 02:24:17.859931    5553 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:24:17.860214    5553 main.go:134] libmachine: Using API Version  1
	I0114 02:24:17.860224    5553 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:24:17.860422    5553 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:24:17.860526    5553 main.go:134] libmachine: (multinode-022105-m02) Calling .DriverName
	I0114 02:24:17.860649    5553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:24:17.860663    5553 main.go:134] libmachine: (multinode-022105-m02) Calling .GetSSHHostname
	I0114 02:24:17.860739    5553 main.go:134] libmachine: (multinode-022105-m02) Calling .GetSSHPort
	I0114 02:24:17.860816    5553 main.go:134] libmachine: (multinode-022105-m02) Calling .GetSSHKeyPath
	I0114 02:24:17.860889    5553 main.go:134] libmachine: (multinode-022105-m02) Calling .GetSSHUsername
	I0114 02:24:17.860965    5553 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/multinode-022105-m02/id_rsa Username:docker}
	I0114 02:24:17.898494    5553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 02:24:17.907574    5553 status.go:257] multinode-022105-m02 status: &{Name:multinode-022105-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0114 02:24:17.907590    5553 status.go:255] checking status of multinode-022105-m03 ...
	I0114 02:24:17.907878    5553 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:24:17.907902    5553 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:24:17.915105    5553 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51431
	I0114 02:24:17.915489    5553 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:24:17.915943    5553 main.go:134] libmachine: Using API Version  1
	I0114 02:24:17.915956    5553 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:24:17.916192    5553 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:24:17.916292    5553 main.go:134] libmachine: (multinode-022105-m03) Calling .GetState
	I0114 02:24:17.916366    5553 main.go:134] libmachine: (multinode-022105-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 02:24:17.916443    5553 main.go:134] libmachine: (multinode-022105-m03) DBG | hyperkit pid from json: 5318
	I0114 02:24:17.917512    5553 main.go:134] libmachine: (multinode-022105-m03) DBG | hyperkit pid 5318 missing from process table
	I0114 02:24:17.917537    5553 status.go:330] multinode-022105-m03 host status = "Stopped" (err=<nil>)
	I0114 02:24:17.917542    5553 status.go:343] host is not running, skipping remaining checks
	I0114 02:24:17.917548    5553 status.go:257] multinode-022105-m03 status: &{Name:multinode-022105-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.69s)

TestMultiNode/serial/StartAfterStop (31.09s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 node start m03 --alsologtostderr
E0114 02:24:27.282480    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-022105 node start m03 --alsologtostderr: (30.738335468s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.09s)

TestMultiNode/serial/RestartKeepsNodes (862.79s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-022105
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-022105
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-022105: (12.425630819s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-022105 --wait=true -v=8 --alsologtostderr
E0114 02:26:43.424937    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:27:11.120997    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:28:10.906356    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:28:16.854930    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:29:33.995569    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:31:43.420567    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:33:10.928864    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:33:16.875884    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:34:39.930979    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:36:43.445146    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:38:06.501611    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:38:10.927679    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:38:16.874527    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-022105 --wait=true -v=8 --alsologtostderr: (14m10.250572287s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-022105
--- PASS: TestMultiNode/serial/RestartKeepsNodes (862.79s)

TestMultiNode/serial/DeleteNode (4.99s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-022105 node delete m03: (4.663984913s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.99s)

TestMultiNode/serial/StopMultiNode (4.44s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-022105 stop: (4.291640308s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-022105 status: exit status 7 (76.201872ms)

-- stdout --
	multinode-022105
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-022105-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-022105 status --alsologtostderr: exit status 7 (75.390871ms)

-- stdout --
	multinode-022105
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-022105-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0114 02:39:21.235637    6435 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:39:21.235893    6435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:39:21.235900    6435 out.go:309] Setting ErrFile to fd 2...
	I0114 02:39:21.235904    6435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:39:21.236006    6435 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
	I0114 02:39:21.236214    6435 out.go:303] Setting JSON to false
	I0114 02:39:21.236240    6435 mustload.go:65] Loading cluster: multinode-022105
	I0114 02:39:21.236289    6435 notify.go:220] Checking for updates...
	I0114 02:39:21.236555    6435 config.go:180] Loaded profile config "multinode-022105": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:39:21.236568    6435 status.go:255] checking status of multinode-022105 ...
	I0114 02:39:21.236920    6435 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:39:21.236967    6435 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:39:21.243463    6435 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51646
	I0114 02:39:21.243782    6435 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:39:21.244152    6435 main.go:134] libmachine: Using API Version  1
	I0114 02:39:21.244163    6435 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:39:21.244351    6435 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:39:21.244443    6435 main.go:134] libmachine: (multinode-022105) Calling .GetState
	I0114 02:39:21.244516    6435 main.go:134] libmachine: (multinode-022105) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 02:39:21.244587    6435 main.go:134] libmachine: (multinode-022105) DBG | hyperkit pid from json: 5645
	I0114 02:39:21.245417    6435 main.go:134] libmachine: (multinode-022105) DBG | hyperkit pid 5645 missing from process table
	I0114 02:39:21.245451    6435 status.go:330] multinode-022105 host status = "Stopped" (err=<nil>)
	I0114 02:39:21.245458    6435 status.go:343] host is not running, skipping remaining checks
	I0114 02:39:21.245463    6435 status.go:257] multinode-022105 status: &{Name:multinode-022105 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 02:39:21.245479    6435 status.go:255] checking status of multinode-022105-m02 ...
	I0114 02:39:21.245737    6435 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0114 02:39:21.245761    6435 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I0114 02:39:21.252295    6435 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51648
	I0114 02:39:21.252612    6435 main.go:134] libmachine: () Calling .GetVersion
	I0114 02:39:21.252946    6435 main.go:134] libmachine: Using API Version  1
	I0114 02:39:21.252963    6435 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 02:39:21.253148    6435 main.go:134] libmachine: () Calling .GetMachineName
	I0114 02:39:21.253237    6435 main.go:134] libmachine: (multinode-022105-m02) Calling .GetState
	I0114 02:39:21.253312    6435 main.go:134] libmachine: (multinode-022105-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0114 02:39:21.253381    6435 main.go:134] libmachine: (multinode-022105-m02) DBG | hyperkit pid from json: 5930
	I0114 02:39:21.254216    6435 main.go:134] libmachine: (multinode-022105-m02) DBG | hyperkit pid 5930 missing from process table
	I0114 02:39:21.254263    6435 status.go:330] multinode-022105-m02 host status = "Stopped" (err=<nil>)
	I0114 02:39:21.254273    6435 status.go:343] host is not running, skipping remaining checks
	I0114 02:39:21.254279    6435 status.go:257] multinode-022105-m02 status: &{Name:multinode-022105-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (4.44s)

TestMultiNode/serial/RestartMultiNode (578.57s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-022105 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0114 02:41:43.441304    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:43:10.923323    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:43:16.871230    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:46:14.012958    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:46:43.437434    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:48:10.962824    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:48:16.911534    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-022105 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (9m38.23987134s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022105 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (578.57s)

TestMultiNode/serial/ValidateNameConflict (43.95s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-022105
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-022105-m02 --driver=hyperkit 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-022105-m02 --driver=hyperkit : exit status 14 (359.43037ms)

-- stdout --
	* [multinode-022105-m02] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-022105-m02' is duplicated with machine name 'multinode-022105-m02' in profile 'multinode-022105'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-022105-m03 --driver=hyperkit 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-022105-m03 --driver=hyperkit : (39.818954724s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-022105
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-022105: exit status 80 (284.810617ms)

-- stdout --
	* Adding node m03 to cluster multinode-022105
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-022105-m03 already exists in multinode-022105-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-022105-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-022105-m03: (3.427745559s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.95s)

TestPreload (136.4s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-024948 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-024948 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m6.5538886s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-024948 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-024948 -- docker pull gcr.io/k8s-minikube/busybox: (2.602023087s)
preload_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-024948 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit  --kubernetes-version=v1.24.6
E0114 02:51:19.967312    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:51:43.480719    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
preload_test.go:67: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-024948 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit  --kubernetes-version=v1.24.6: (1m1.808110352s)
preload_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-024948 -- docker images
helpers_test.go:175: Cleaning up "test-preload-024948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-024948
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-024948: (5.274906742s)
--- PASS: TestPreload (136.40s)

TestScheduledStopUnix (108.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-025204 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-025204 --memory=2048 --driver=hyperkit : (37.498004912s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-025204 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-025204 -n scheduled-stop-025204
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-025204 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-025204 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-025204 -n scheduled-stop-025204
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-025204
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-025204 --schedule 15s
E0114 02:53:10.963329    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:53:16.911182    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-025204
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-025204: exit status 7 (67.627176ms)

-- stdout --
	scheduled-stop-025204
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-025204 -n scheduled-stop-025204
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-025204 -n scheduled-stop-025204: exit status 7 (65.771555ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-025204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-025204
--- PASS: TestScheduledStopUnix (108.93s)

TestSkaffold (74.14s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1860092310 version
skaffold_test.go:63: skaffold version: v2.0.4
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-025353 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-025353 --memory=2600 --driver=hyperkit : (40.260136275s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1860092310 run --minikube-profile skaffold-025353 --kube-context skaffold-025353 --status-check=true --port-forward=false --interactive=false
E0114 02:54:46.537504    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1860092310 run --minikube-profile skaffold-025353 --kube-context skaffold-025353 --status-check=true --port-forward=false --interactive=false: (17.315819362s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-8544d7b97c-9m6pt" [fc0b25b8-548f-42b5-86d6-0561eab6b9f3] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.009994986s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-5d45c54d47-s8d7p" [36d5d7c8-e539-47d9-82e5-dc3a1b08ab34] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006793003s
helpers_test.go:175: Cleaning up "skaffold-025353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-025353
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-025353: (3.468691935s)
--- PASS: TestSkaffold (74.14s)

TestRunningBinaryUpgrade (162.8s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.2841957136.exe start -p running-upgrade-030435 --memory=2200 --vm-driver=hyperkit 
E0114 03:04:54.465559    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.2841957136.exe start -p running-upgrade-030435 --memory=2200 --vm-driver=hyperkit : (1m33.848584211s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-030435 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-030435 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m1.961564552s)
helpers_test.go:175: Cleaning up "running-upgrade-030435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-030435
E0114 03:07:14.946492    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-030435: (5.285863115s)
--- PASS: TestRunningBinaryUpgrade (162.80s)

TestKubernetesUpgrade (139.11s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-030216 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-030216 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m12.386071702s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-030216
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-030216: (2.23653791s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-030216 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-030216 status --format={{.Host}}: exit status 7 (69.866136ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-030216 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=hyperkit 
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-030216 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=hyperkit : (37.693821543s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-030216 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-030216 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-030216 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (727.98192ms)
-- stdout --
	* [kubernetes-upgrade-030216] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-030216
	    minikube start -p kubernetes-upgrade-030216 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0302162 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-030216 --kubernetes-version=v1.25.3
	    
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-030216 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=hyperkit 
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-030216 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=hyperkit : (22.447719986s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-030216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-030216
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-030216: (3.497689386s)
--- PASS: TestKubernetesUpgrade (139.11s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.85s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15642
- KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2119201204/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2119201204/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2119201204/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2119201204/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.85s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.95s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15642
- KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2950920143/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2950920143/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2950920143/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2950920143/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.95s)

TestNetworkPlugins/group/auto/Start (405.6s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=hyperkit 
E0114 02:56:43.479502    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 02:58:10.960980    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 02:58:16.909045    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 02:59:54.423595    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 02:59:54.429252    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 02:59:54.441430    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 02:59:54.462595    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 02:59:54.504278    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 02:59:54.586458    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 02:59:54.748053    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 02:59:55.070238    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 02:59:55.711851    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 02:59:56.994152    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 02:59:59.555590    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:00:04.677803    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:00:14.919942    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:00:35.399939    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:01:16.360408    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:01:43.475367    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=hyperkit : (6m45.597980141s)
--- PASS: TestNetworkPlugins/group/auto/Start (405.60s)

TestNetworkPlugins/group/auto/KubeletFlags (0.15s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-025507 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

TestNetworkPlugins/group/auto/NetCatPod (13.23s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-025507 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-h55jv" [2546284c-bc3f-43d2-b554-3807b1c8fd0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-h55jv" [2546284c-bc3f-43d2-b554-3807b1c8fd0b] Running
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.00426438s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.23s)

TestNetworkPlugins/group/auto/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-025507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (5.1s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.104430066s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.10s)

TestStoppedBinaryUpgrade/Setup (1.78s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.78s)

TestStoppedBinaryUpgrade/Upgrade (170.06s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.1201332455.exe start -p stopped-upgrade-030226 --memory=2200 --vm-driver=hyperkit 
E0114 03:02:38.291840    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:02:54.082789    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 03:03:10.997025    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 03:03:16.946432    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.1201332455.exe start -p stopped-upgrade-030226 --memory=2200 --vm-driver=hyperkit : (1m40.429140124s)
version_upgrade_test.go:199: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.1201332455.exe -p stopped-upgrade-030226 stop
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.1201332455.exe -p stopped-upgrade-030226 stop: (8.139264285s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-030226 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-030226 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m1.493090188s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (170.06s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.62s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-030226
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-030226: (2.621380192s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.62s)

TestPause/serial/Start (62.33s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-030526 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-030526 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (1m2.328001184s)
--- PASS: TestPause/serial/Start (62.33s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.49s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-030718 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-030718 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (490.279144ms)
-- stdout --
	* [NoKubernetes-030718] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.49s)

TestNoKubernetes/serial/StartWithK8s (42.38s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-030718 --driver=hyperkit 
E0114 03:07:25.188274    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-030718 --driver=hyperkit : (42.184496414s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-030718 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.38s)

TestNetworkPlugins/group/cilium/Start (105.29s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=hyperkit 
E0114 03:07:45.669273    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:08:00.004114    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=hyperkit : (1m45.293126471s)
--- PASS: TestNetworkPlugins/group/cilium/Start (105.29s)

TestNoKubernetes/serial/StartWithStopK8s (16.48s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-030718 --no-kubernetes --driver=hyperkit 
E0114 03:08:10.999882    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-030718 --no-kubernetes --driver=hyperkit : (13.913120569s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-030718 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-030718 status -o json: exit status 2 (143.536627ms)
-- stdout --
	{"Name":"NoKubernetes-030718","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-030718
E0114 03:08:16.947095    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-030718: (2.423141211s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.48s)

TestNoKubernetes/serial/Start (14.93s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-030718 --no-kubernetes --driver=hyperkit 
E0114 03:08:26.629647    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-030718 --no-kubernetes --driver=hyperkit : (14.927736204s)
--- PASS: TestNoKubernetes/serial/Start (14.93s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-030718 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-030718 "sudo systemctl is-active --quiet service kubelet": exit status 1 (174.356088ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

TestNoKubernetes/serial/ProfileList (0.63s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.63s)

TestNoKubernetes/serial/Stop (2.25s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-030718
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-030718: (2.252926318s)
--- PASS: TestNoKubernetes/serial/Stop (2.25s)

TestNoKubernetes/serial/StartNoArgs (15.13s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-030718 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-030718 --driver=hyperkit : (15.12567622s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (15.13s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-030718 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-030718 "sudo systemctl is-active --quiet service kubelet": exit status 1 (132.176018ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

TestNetworkPlugins/group/calico/Start (308.14s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=hyperkit 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=hyperkit : (5m8.138217006s)
--- PASS: TestNetworkPlugins/group/calico/Start (308.14s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-jx9qb" [fed873e5-7b33-44cb-8bb2-0689ff7595c6] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.015691602s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-025507 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.15s)

TestNetworkPlugins/group/cilium/NetCatPod (13.75s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-025507 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-dvgjq" [57c96824-c80c-4d42-8749-dae1da8bb0ec] Pending
helpers_test.go:342: "netcat-5788d667bd-dvgjq" [57c96824-c80c-4d42-8749-dae1da8bb0ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-dvgjq" [57c96824-c80c-4d42-8749-dae1da8bb0ec] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 13.006619235s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (13.75s)

TestNetworkPlugins/group/cilium/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-025507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

TestNetworkPlugins/group/cilium/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.12s)

TestNetworkPlugins/group/cilium/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (57.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
E0114 03:09:48.550458    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:09:54.462453    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (57.880965475s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.88s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-025507 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context custom-flannel-025507 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-z5795" [7777c42b-308b-4b77-b5d5-93c82534925c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-z5795" [7777c42b-308b-4b77-b5d5-93c82534925c] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.006459174s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context custom-flannel-025507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context custom-flannel-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context custom-flannel-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/false/Start (101.35s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=hyperkit 
E0114 03:11:26.574636    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 03:11:43.516338    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 03:12:04.703021    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:12:32.390573    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=hyperkit : (1m41.346159525s)
--- PASS: TestNetworkPlugins/group/false/Start (101.35s)

TestNetworkPlugins/group/false/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-025507 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.15s)

TestNetworkPlugins/group/false/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-025507 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-hdbvw" [817cbbd1-d9a1-4ba3-8b2c-d8ba01ea0313] Pending
helpers_test.go:342: "netcat-5788d667bd-hdbvw" [817cbbd1-d9a1-4ba3-8b2c-d8ba01ea0313] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-hdbvw" [817cbbd1-d9a1-4ba3-8b2c-d8ba01ea0313] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.007095951s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.19s)

TestNetworkPlugins/group/false/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-025507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.11s)

TestNetworkPlugins/group/false/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

TestNetworkPlugins/group/false/HairPin (5.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.108621974s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.11s)
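Note: with `--cni=false` the hairpin probe is apparently expected to fail, which is why the non-zero `nc` exit above still counts as a pass. The check itself reduces to a timed TCP connect probe (`nc -w 5 -z <host> <port>`). A minimal sketch of that exit-status logic, as a standalone Python helper (hypothetical, not part of the minikube test suite):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 5.0) -> int:
    """Rough equivalent of `nc -w 5 -z host port`: returns 0 if a TCP
    connection can be established within the timeout, 1 otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 0
    except OSError:
        return 1
```

In the HairPin test the probe target is the pod's own service name, so exit status 1 here corresponds to the `command terminated with exit code 1` seen in the stderr block above.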

TestNetworkPlugins/group/kindnet/Start (70.54s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=hyperkit 
E0114 03:13:10.997871    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 03:13:16.945296    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=hyperkit : (1m10.54433935s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.54s)

TestNetworkPlugins/group/calico/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-nb8tb" [2c98907b-7f23-4871-94d0-4ac142830d8b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.013250055s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-025507 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

TestNetworkPlugins/group/calico/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-025507 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-5995r" [0d1b5ff1-c726-47b9-b2b8-06df8ebfa388] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-5995r" [0d1b5ff1-c726-47b9-b2b8-06df8ebfa388] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006783151s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.29s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-pjv62" [49d09a7e-4791-4e55-bc97-6a7195629a5c] Running

=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.011551114s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-025507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-025507 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.15s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-025507 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-jk5pn" [acc0afe3-fe66-4598-a765-61e54fd145e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 03:14:23.691543    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:14:23.696910    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:14:23.707043    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:14:23.728660    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:14:23.769885    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:14:23.851667    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:14:24.013112    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:14:24.334455    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:14:24.976175    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-jk5pn" [acc0afe3-fe66-4598-a765-61e54fd145e2] Running
E0114 03:14:33.937975    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007002477s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

TestNetworkPlugins/group/flannel/Start (54.36s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=hyperkit 
E0114 03:14:26.256876    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:14:28.817140    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=hyperkit : (54.357444726s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.36s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-025507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Start (55.94s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=hyperkit 
E0114 03:14:44.178370    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:14:54.460853    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:15:04.660142    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=hyperkit : (55.943329774s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (55.94s)

TestNetworkPlugins/group/flannel/ControllerPod (8.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:342: "kube-flannel-ds-amd64-j4srj" [e2eebe7c-0959-4b2c-8aae-15c2413f8e5c] Pending: Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni]) / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:342: "kube-flannel-ds-amd64-j4srj" [e2eebe7c-0959-4b2c-8aae-15c2413f8e5c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:342: "kube-flannel-ds-amd64-j4srj" [e2eebe7c-0959-4b2c-8aae-15c2413f8e5c] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 8.012469374s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (8.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-025507 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

TestNetworkPlugins/group/flannel/NetCatPod (12.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context flannel-025507 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-8mnh2" [5bb6a312-58e4-4879-888a-595242954235] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-8mnh2" [5bb6a312-58e4-4879-888a-595242954235] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.006100013s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.23s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-025507 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-025507 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-6vzg9" [a57510fe-9e92-4be5-bd39-fa0c7dd3185c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-6vzg9" [a57510fe-9e92-4be5-bd39-fa0c7dd3185c] Running
E0114 03:15:44.745821    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:15:44.751077    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:15:44.762254    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:15:44.782476    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:15:44.822939    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:15:44.905129    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:15:45.065376    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:15:45.385627    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:15:45.621495    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006625856s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context flannel-025507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context flannel-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context flannel-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-025507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0114 03:15:46.025835    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (53.81s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=hyperkit 
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=hyperkit : (53.809000902s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.81s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (64.68s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=hyperkit 
E0114 03:15:54.989719    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:16:05.231111    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:16:17.521524    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:16:25.711361    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-025507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=hyperkit : (1m4.677196382s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (64.68s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-025507 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-025507 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-7c9sq" [0a73b1d2-739b-4aff-8e3e-2e2e8a5d18b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 03:16:43.514516    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-7c9sq" [0a73b1d2-739b-4aff-8e3e-2e2e8a5d18b6] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.006205021s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-025507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-025507 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.19s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-025507 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-lgfcw" [0305200f-6d60-46dd-a305-356a19f15ec7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-lgfcw" [0305200f-6d60-46dd-a305-356a19f15ec7] Running
E0114 03:17:04.700488    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:17:06.671411    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:17:07.541814    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.005638802s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-025507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-025507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (144.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-031958 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0114 03:19:59.248061    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:20:20.046355    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:20.052648    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:20.063491    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:20.083808    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:20.124548    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:20.205084    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:20.366273    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:20.687500    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:21.328178    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:22.608495    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:23.529088    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:20:25.170762    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:28.909625    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:20:30.291147    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:34.784338    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:34.790019    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:34.802141    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:34.822707    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:34.864953    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:34.946374    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:35.107434    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:35.427779    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:36.068757    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:37.349000    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:39.910810    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:40.210095    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:20:40.533375    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:20:44.744901    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:20:45.031711    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:20:55.271816    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:21:01.013654    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:21:12.430925    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:21:15.752236    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:21:40.246791    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:40.251874    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:40.262933    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:40.285089    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:40.325515    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:40.406556    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:40.567599    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:40.888494    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:41.529477    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:41.975088    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:21:42.810407    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:43.513802    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 03:21:45.371120    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:21:45.449898    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:21:50.492284    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-031958 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m24.654382538s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-031958 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [bef13775-35f9-4906-9793-ae134716cf55] Pending
helpers_test.go:342: "busybox" [bef13775-35f9-4906-9793-ae134716cf55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:342: "busybox" [bef13775-35f9-4906-9793-ae134716cf55] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0178501s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-031958 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (66.56s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-032226 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-032226 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.25.3: (1m6.564814309s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-031958 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-031958 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-031958 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-031958 --alsologtostderr -v=3: (1.252779196s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-031958 -n old-k8s-version-031958
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-031958 -n old-k8s-version-031958: exit status 7 (73.130327ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-031958 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (460.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-031958 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0114 03:22:37.627798    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:22:45.061938    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:23:02.175660    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:23:03.895912    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:23:10.994856    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 03:23:12.748943    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:23:16.942956    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 03:23:18.587895    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:23:18.631810    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:23:27.748641    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-031958 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (7m40.344054458s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-031958 -n old-k8s-version-031958
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (460.51s)

TestStartStop/group/no-preload/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-032226 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2a430a92-4bbb-484f-a5c3-b0ae0c9dee83] Pending
helpers_test.go:342: "busybox" [2a430a92-4bbb-484f-a5c3-b0ae0c9dee83] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [2a430a92-4bbb-484f-a5c3-b0ae0c9dee83] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.015325173s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-032226 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-032226 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-032226 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/no-preload/serial/Stop (8.25s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-032226 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-032226 --alsologtostderr -v=3: (8.249918712s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.25s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-032226 -n no-preload-032226
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-032226 -n no-preload-032226: exit status 7 (68.638873ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-032226 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/no-preload/serial/SecondStart (316.29s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-032226 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.25.3
E0114 03:24:01.536383    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:24:18.279992    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:24:23.688348    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:24:24.096299    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:24:29.289388    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:24:40.000769    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 03:24:40.509674    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:24:45.972299    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:24:54.458103    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:25:20.043832    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:25:34.783580    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:25:44.742380    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:25:47.737141    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:26:02.471712    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:26:40.246526    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:26:43.511382    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 03:26:56.659188    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:27:04.698492    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:27:07.936634    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:27:24.350408    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:27:45.061355    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:28:06.570414    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 03:28:10.992948    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 03:28:16.940219    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 03:29:01.533913    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-032226 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.25.3: (5m16.10378294s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-032226 -n no-preload-032226
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (316.29s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-6xrws" [5b63e979-8545-4d64-be59-fa18086ba51b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-6xrws" [5b63e979-8545-4d64-be59-fa18086ba51b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.009396388s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-6xrws" [5b63e979-8545-4d64-be59-fa18086ba51b] Running
E0114 03:29:18.279450    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004970556s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-032226 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-032226 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/no-preload/serial/Pause (1.94s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-032226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-032226 -n no-preload-032226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-032226 -n no-preload-032226: exit status 2 (156.493295ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-032226 -n no-preload-032226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-032226 -n no-preload-032226: exit status 2 (156.193106ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-032226 --alsologtostderr -v=1
E0114 03:29:23.686553    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-032226 -n no-preload-032226
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-032226 -n no-preload-032226
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.94s)

TestStartStop/group/embed-certs/serial/FirstStart (59.63s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-032930 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.25.3
E0114 03:29:54.475037    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-032930 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.25.3: (59.630580844s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.63s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-g2zz7" [c4838aca-e2bb-40a1-ad5d-9902af6aa0a9] Running
E0114 03:30:20.079255    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009548892s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-g2zz7" [c4838aca-e2bb-40a1-ad5d-9902af6aa0a9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005009803s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-031958 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-031958 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/old-k8s-version/serial/Pause (2.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-031958 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-031958 -n old-k8s-version-031958
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-031958 -n old-k8s-version-031958: exit status 2 (170.62739ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-031958 -n old-k8s-version-031958
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-031958 -n old-k8s-version-031958: exit status 2 (166.430883ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-031958 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-031958 -n old-k8s-version-031958
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-031958 -n old-k8s-version-031958
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.13s)

TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-032930 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [9244c636-700c-4604-a725-1535a4e094e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:342: "busybox" [9244c636-700c-4604-a725-1535a4e094e1] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.017407048s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-032930 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-033034 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.25.3
E0114 03:30:34.820817    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-033034 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.25.3: (56.21241105s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.21s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-032930 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-032930 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/embed-certs/serial/Stop (3.28s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-032930 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-032930 --alsologtostderr -v=3: (3.281748378s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.28s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-032930 -n embed-certs-032930
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-032930 -n embed-certs-032930: exit status 7 (66.66179ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-032930 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/embed-certs/serial/SecondStart (315.58s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-032930 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.25.3
E0114 03:30:44.782200    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:30:46.802268    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-032930 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.25.3: (5m15.380671763s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-032930 -n embed-certs-032930
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (315.58s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-033034 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5148e5a8-ef01-4090-a558-a3c228fd6116] Pending
helpers_test.go:342: "busybox" [5148e5a8-ef01-4090-a558-a3c228fd6116] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [5148e5a8-ef01-4090-a558-a3c228fd6116] Running
E0114 03:31:40.285481    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.018541967s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-033034 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-033034 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-033034 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.66s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-033034 --alsologtostderr -v=3
E0114 03:31:43.549670    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-033034 --alsologtostderr -v=3: (3.243656212s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-033034 -n default-k8s-diff-port-033034
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-033034 -n default-k8s-diff-port-033034: exit status 7 (67.740445ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-033034 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (310.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-033034 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.25.3
E0114 03:31:56.697162    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:32:04.736708    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:32:07.829541    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory
E0114 03:32:22.955494    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:22.961854    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:22.972116    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:22.994284    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:23.035024    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:23.116037    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:23.276446    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:23.598010    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:24.239281    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:25.521066    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:28.082225    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:33.203709    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:43.445194    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:32:45.099826    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:32:57.609178    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:33:03.925553    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:33:11.032173    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
E0114 03:33:16.980130    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/functional-021019/client.crt: no such file or directory
E0114 03:33:32.873245    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:32.879578    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:32.890617    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:32.910922    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:32.951174    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:33.032165    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:33.193336    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:33.514017    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:34.155617    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:35.436692    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:37.996997    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:43.118614    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:33:44.887153    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:33:53.359836    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:34:01.575134    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:34:08.147550    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:34:13.841952    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:34:18.319398    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:34:23.726220    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/cilium-025507/client.crt: no such file or directory
E0114 03:34:54.495337    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/skaffold-025353/client.crt: no such file or directory
E0114 03:34:54.802337    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
E0114 03:35:06.807398    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
E0114 03:35:20.082897    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:35:24.687797    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/calico-025507/client.crt: no such file or directory
E0114 03:35:34.820626    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
E0114 03:35:41.371127    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kindnet-025507/client.crt: no such file or directory
E0114 03:35:44.782173    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/custom-flannel-025507/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-033034 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.25.3: (5m10.337234247s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-033034 -n default-k8s-diff-port-033034
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (310.54s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-tlg5v" [d2d12b05-a8fe-43c9-8b09-9a37d8d27d94] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-tlg5v" [d2d12b05-a8fe-43c9-8b09-9a37d8d27d94] Running
E0114 03:36:14.125917    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/addons-020552/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.009914665s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-tlg5v" [d2d12b05-a8fe-43c9-8b09-9a37d8d27d94] Running
E0114 03:36:16.722797    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/no-preload-032226/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00623149s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-032930 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-032930 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/embed-certs/serial/Pause (1.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-032930 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-032930 -n embed-certs-032930
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-032930 -n embed-certs-032930: exit status 2 (151.417598ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-032930 -n embed-certs-032930
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-032930 -n embed-certs-032930: exit status 2 (152.432649ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-032930 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-032930 -n embed-certs-032930
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-032930 -n embed-certs-032930
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.90s)

TestStartStop/group/newest-cni/serial/FirstStart (52.84s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-033627 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=hyperkit  --kubernetes-version=v1.25.3
E0114 03:36:40.284151    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/bridge-025507/client.crt: no such file or directory
E0114 03:36:43.135138    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/flannel-025507/client.crt: no such file or directory
E0114 03:36:43.548628    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-033627 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=hyperkit  --kubernetes-version=v1.25.3: (52.839378725s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.84s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-q27bh" [5f532341-b8cb-47ee-bcea-3ad10c0f7fa2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0114 03:36:56.696225    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/kubenet-025507/client.crt: no such file or directory
E0114 03:36:57.870147    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/enable-default-cni-025507/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-q27bh" [5f532341-b8cb-47ee-bcea-3ad10c0f7fa2] Running
E0114 03:37:04.736434    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.010720513s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-q27bh" [5f532341-b8cb-47ee-bcea-3ad10c0f7fa2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005058626s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-033034 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-033034 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-033034 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-033034 -n default-k8s-diff-port-033034
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-033034 -n default-k8s-diff-port-033034: exit status 2 (162.516236ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-033034 -n default-k8s-diff-port-033034
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-033034 -n default-k8s-diff-port-033034: exit status 2 (155.067592ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-033034 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-033034 -n default-k8s-diff-port-033034
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-033034 -n default-k8s-diff-port-033034
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.96s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-033627 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/newest-cni/serial/Stop (3.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-033627 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-033627 --alsologtostderr -v=3: (3.253331104s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-033627 -n newest-cni-033627
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-033627 -n newest-cni-033627: exit status 7 (66.80408ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-033627 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/newest-cni/serial/SecondStart (29.49s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-033627 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=hyperkit  --kubernetes-version=v1.25.3
E0114 03:37:45.100372    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/false-025507/client.crt: no such file or directory
E0114 03:37:50.647686    2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/old-k8s-version-031958/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-033627 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=hyperkit  --kubernetes-version=v1.25.3: (29.327015326s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-033627 -n newest-cni-033627
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.49s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-033627 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/newest-cni/serial/Pause (1.9s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-033627 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-033627 -n newest-cni-033627
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-033627 -n newest-cni-033627: exit status 2 (162.9867ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-033627 -n newest-cni-033627
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-033627 -n newest-cni-033627: exit status 2 (162.080383ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-033627 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-033627 -n newest-cni-033627
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-033627 -n newest-cni-033627
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.90s)

Test skip (17/302)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:455: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:291: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.43s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-033034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-033034
--- SKIP: TestStartStop/group/disable-driver-mounts (0.43s)