Test Report: Hyperkit_macOS 15927

2a1f5b9114b915ff9d9f0d64032af1cf94003b94:2023-09-20:31096

Tests failed (5/307)

Order  Failed test                                                         Duration (s)
22     TestAddons/Setup                                                    15.35
154    TestJSONOutput/start/parallel/DistinctCurrentSteps                  0
155    TestJSONOutput/start/parallel/IncreasingCurrentSteps                0
180    TestMinikubeProfile                                                 23.88
346    TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages   2.76
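Each failure below includes the test's own log excerpt. To re-run one of these locally, the tests referenced in the excerpts (addons_test.go, json_output_test.go) live under test/integration in the minikube repo. A minimal sketch, assuming a checked-out minikube tree with the darwin binary already built at out/minikube-darwin-amd64; the --minikube-start-args flag follows the repo's integration-test conventions and should be treated as an assumption here:

	# Hypothetical local re-run of the first failing test against the hyperkit driver
	go test -v -timeout 30m -run "TestAddons/Setup" ./test/integration \
	  -args --minikube-start-args="--driver=hyperkit"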
TestAddons/Setup (15.35s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-354000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p addons-354000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 90 (15.339371358s)

-- stdout --
	* [addons-354000] minikube v1.31.2 on Darwin 13.5.2
	  - MINIKUBE_LOCATION=15927
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node addons-354000 in cluster addons-354000
	* Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0920 09:57:38.040443    1865 out.go:296] Setting OutFile to fd 1 ...
	I0920 09:57:38.040691    1865 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 09:57:38.040696    1865 out.go:309] Setting ErrFile to fd 2...
	I0920 09:57:38.040700    1865 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 09:57:38.040867    1865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
	I0920 09:57:38.042287    1865 out.go:303] Setting JSON to false
	I0920 09:57:38.061436    1865 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1632,"bootTime":1695227426,"procs":394,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 09:57:38.061567    1865 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0920 09:57:38.082670    1865 out.go:177] * [addons-354000] minikube v1.31.2 on Darwin 13.5.2
	I0920 09:57:38.125002    1865 out.go:177]   - MINIKUBE_LOCATION=15927
	I0920 09:57:38.125080    1865 notify.go:220] Checking for updates...
	I0920 09:57:38.146686    1865 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	I0920 09:57:38.168027    1865 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 09:57:38.189858    1865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 09:57:38.210899    1865 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	I0920 09:57:38.232096    1865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 09:57:38.254304    1865 driver.go:373] Setting default libvirt URI to qemu:///system
	I0920 09:57:38.282625    1865 out.go:177] * Using the hyperkit driver based on user configuration
	I0920 09:57:38.325054    1865 start.go:298] selected driver: hyperkit
	I0920 09:57:38.325081    1865 start.go:902] validating driver "hyperkit" against <nil>
	I0920 09:57:38.325105    1865 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 09:57:38.329159    1865 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:57:38.329270    1865 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15927-1321/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0920 09:57:38.336087    1865 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I0920 09:57:38.339515    1865 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 09:57:38.339531    1865 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0920 09:57:38.339559    1865 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0920 09:57:38.339773    1865 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 09:57:38.339803    1865 cni.go:84] Creating CNI manager for ""
	I0920 09:57:38.339818    1865 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 09:57:38.339826    1865 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 09:57:38.339832    1865 start_flags.go:321] config:
	{Name:addons-354000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-354000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0920 09:57:38.339959    1865 iso.go:125] acquiring lock: {Name:mkeb4366e068e3c3b5036486999179c5031df1bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:57:38.382017    1865 out.go:177] * Starting control plane node addons-354000 in cluster addons-354000
	I0920 09:57:38.403871    1865 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0920 09:57:38.403985    1865 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0920 09:57:38.404012    1865 cache.go:57] Caching tarball of preloaded images
	I0920 09:57:38.404219    1865 preload.go:174] Found /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0920 09:57:38.404238    1865 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0920 09:57:38.404737    1865 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/addons-354000/config.json ...
	I0920 09:57:38.404782    1865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/addons-354000/config.json: {Name:mk855d520432876a8f24cd6d436d50c7233577ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:57:38.405338    1865 start.go:365] acquiring machines lock for addons-354000: {Name:mk70bd0945dc485357d15820b17cfe9cfbaaacbd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 09:57:38.405510    1865 start.go:369] acquired machines lock for "addons-354000" in 153.208µs
	I0920 09:57:38.405548    1865 start.go:93] Provisioning new machine with config: &{Name:addons-354000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-354000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 09:57:38.405652    1865 start.go:125] createHost starting for "" (driver="hyperkit")
	I0920 09:57:38.448053    1865 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 09:57:38.448453    1865 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 09:57:38.448519    1865 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 09:57:38.457056    1865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49372
	I0920 09:57:38.457494    1865 main.go:141] libmachine: () Calling .GetVersion
	I0920 09:57:38.457939    1865 main.go:141] libmachine: Using API Version  1
	I0920 09:57:38.457952    1865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 09:57:38.458180    1865 main.go:141] libmachine: () Calling .GetMachineName
	I0920 09:57:38.458300    1865 main.go:141] libmachine: (addons-354000) Calling .GetMachineName
	I0920 09:57:38.458384    1865 main.go:141] libmachine: (addons-354000) Calling .DriverName
	I0920 09:57:38.458476    1865 start.go:159] libmachine.API.Create for "addons-354000" (driver="hyperkit")
	I0920 09:57:38.458505    1865 client.go:168] LocalClient.Create starting
	I0920 09:57:38.458550    1865 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca.pem
	I0920 09:57:38.674943    1865 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/cert.pem
	I0920 09:57:38.799538    1865 main.go:141] libmachine: Running pre-create checks...
	I0920 09:57:38.799547    1865 main.go:141] libmachine: (addons-354000) Calling .PreCreateCheck
	I0920 09:57:38.799735    1865 main.go:141] libmachine: (addons-354000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 09:57:38.799911    1865 main.go:141] libmachine: (addons-354000) Calling .GetConfigRaw
	I0920 09:57:38.800452    1865 main.go:141] libmachine: Creating machine...
	I0920 09:57:38.800478    1865 main.go:141] libmachine: (addons-354000) Calling .Create
	I0920 09:57:38.800629    1865 main.go:141] libmachine: (addons-354000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 09:57:38.800813    1865 main.go:141] libmachine: (addons-354000) DBG | I0920 09:57:38.800636    1873 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/15927-1321/.minikube
	I0920 09:57:38.800869    1865 main.go:141] libmachine: (addons-354000) Downloading /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15927-1321/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0920 09:57:39.022399    1865 main.go:141] libmachine: (addons-354000) DBG | I0920 09:57:39.022246    1873 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/id_rsa...
	I0920 09:57:39.104015    1865 main.go:141] libmachine: (addons-354000) DBG | I0920 09:57:39.103911    1873 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/addons-354000.rawdisk...
	I0920 09:57:39.104028    1865 main.go:141] libmachine: (addons-354000) DBG | Writing magic tar header
	I0920 09:57:39.104037    1865 main.go:141] libmachine: (addons-354000) DBG | Writing SSH key tar header
	I0920 09:57:39.104885    1865 main.go:141] libmachine: (addons-354000) DBG | I0920 09:57:39.104784    1873 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000 ...
	I0920 09:57:39.423425    1865 main.go:141] libmachine: (addons-354000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 09:57:39.423441    1865 main.go:141] libmachine: (addons-354000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/hyperkit.pid
	I0920 09:57:39.423488    1865 main.go:141] libmachine: (addons-354000) DBG | Using UUID cdec0c48-57d6-11ee-9652-149d997fca88
	I0920 09:57:39.717319    1865 main.go:141] libmachine: (addons-354000) DBG | Generated MAC 3e:e5:64:b7:61:b7
	I0920 09:57:39.717355    1865 main.go:141] libmachine: (addons-354000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-354000
	I0920 09:57:39.717427    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"cdec0c48-57d6-11ee-9652-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000294690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0920 09:57:39.717468    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"cdec0c48-57d6-11ee-9652-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000294690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0920 09:57:39.717522    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "cdec0c48-57d6-11ee-9652-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/addons-354000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/tty,log=/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/bzimage,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-354000"}
	I0920 09:57:39.717575    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U cdec0c48-57d6-11ee-9652-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/addons-354000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/tty,log=/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/console-ring -f kexec,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/bzimage,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-354000"
	I0920 09:57:39.717604    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0920 09:57:39.720437    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 DEBUG: hyperkit: Pid is 1878
	I0920 09:57:39.720794    1865 main.go:141] libmachine: (addons-354000) DBG | Attempt 0
	I0920 09:57:39.720813    1865 main.go:141] libmachine: (addons-354000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 09:57:39.720884    1865 main.go:141] libmachine: (addons-354000) DBG | hyperkit pid from json: 1878
	I0920 09:57:39.721770    1865 main.go:141] libmachine: (addons-354000) DBG | Searching for 3e:e5:64:b7:61:b7 in /var/db/dhcpd_leases ...
	I0920 09:57:39.737872    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0920 09:57:39.794841    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0920 09:57:39.795561    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0920 09:57:39.795576    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0920 09:57:39.795590    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0920 09:57:39.795599    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0920 09:57:40.301200    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0920 09:57:40.301217    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0920 09:57:40.406275    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0920 09:57:40.406301    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0920 09:57:40.406327    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0920 09:57:40.406356    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0920 09:57:40.407246    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0920 09:57:40.407260    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0920 09:57:41.722291    1865 main.go:141] libmachine: (addons-354000) DBG | Attempt 1
	I0920 09:57:41.722312    1865 main.go:141] libmachine: (addons-354000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 09:57:41.722413    1865 main.go:141] libmachine: (addons-354000) DBG | hyperkit pid from json: 1878
	I0920 09:57:41.723169    1865 main.go:141] libmachine: (addons-354000) DBG | Searching for 3e:e5:64:b7:61:b7 in /var/db/dhcpd_leases ...
	I0920 09:57:43.724303    1865 main.go:141] libmachine: (addons-354000) DBG | Attempt 2
	I0920 09:57:43.724319    1865 main.go:141] libmachine: (addons-354000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 09:57:43.724392    1865 main.go:141] libmachine: (addons-354000) DBG | hyperkit pid from json: 1878
	I0920 09:57:43.725132    1865 main.go:141] libmachine: (addons-354000) DBG | Searching for 3e:e5:64:b7:61:b7 in /var/db/dhcpd_leases ...
	I0920 09:57:45.301294    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0920 09:57:45.301341    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0920 09:57:45.301353    1865 main.go:141] libmachine: (addons-354000) DBG | 2023/09/20 09:57:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0920 09:57:45.727227    1865 main.go:141] libmachine: (addons-354000) DBG | Attempt 3
	I0920 09:57:45.727243    1865 main.go:141] libmachine: (addons-354000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 09:57:45.727355    1865 main.go:141] libmachine: (addons-354000) DBG | hyperkit pid from json: 1878
	I0920 09:57:45.728089    1865 main.go:141] libmachine: (addons-354000) DBG | Searching for 3e:e5:64:b7:61:b7 in /var/db/dhcpd_leases ...
	I0920 09:57:47.728753    1865 main.go:141] libmachine: (addons-354000) DBG | Attempt 4
	I0920 09:57:47.728770    1865 main.go:141] libmachine: (addons-354000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 09:57:47.728852    1865 main.go:141] libmachine: (addons-354000) DBG | hyperkit pid from json: 1878
	I0920 09:57:47.729577    1865 main.go:141] libmachine: (addons-354000) DBG | Searching for 3e:e5:64:b7:61:b7 in /var/db/dhcpd_leases ...
	I0920 09:57:49.730857    1865 main.go:141] libmachine: (addons-354000) DBG | Attempt 5
	I0920 09:57:49.730891    1865 main.go:141] libmachine: (addons-354000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 09:57:49.731015    1865 main.go:141] libmachine: (addons-354000) DBG | hyperkit pid from json: 1878
	I0920 09:57:49.732363    1865 main.go:141] libmachine: (addons-354000) DBG | Searching for 3e:e5:64:b7:61:b7 in /var/db/dhcpd_leases ...
	I0920 09:57:49.732398    1865 main.go:141] libmachine: (addons-354000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I0920 09:57:49.732452    1865 main.go:141] libmachine: (addons-354000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3e:e5:64:b7:61:b7 ID:1,3e:e5:64:b7:61:b7 Lease:0x650c760c}
	I0920 09:57:49.732473    1865 main.go:141] libmachine: (addons-354000) DBG | Found match: 3e:e5:64:b7:61:b7
	I0920 09:57:49.732488    1865 main.go:141] libmachine: (addons-354000) DBG | IP: 192.168.64.2
	I0920 09:57:49.732539    1865 main.go:141] libmachine: (addons-354000) Calling .GetConfigRaw
	I0920 09:57:49.733379    1865 main.go:141] libmachine: (addons-354000) Calling .DriverName
	I0920 09:57:49.733531    1865 main.go:141] libmachine: (addons-354000) Calling .DriverName
	I0920 09:57:49.733671    1865 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 09:57:49.733689    1865 main.go:141] libmachine: (addons-354000) Calling .GetState
	I0920 09:57:49.733819    1865 main.go:141] libmachine: (addons-354000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 09:57:49.733874    1865 main.go:141] libmachine: (addons-354000) DBG | hyperkit pid from json: 1878
	I0920 09:57:49.734825    1865 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 09:57:49.734842    1865 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 09:57:49.734849    1865 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 09:57:49.734856    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:49.734976    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:49.735062    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:49.735158    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:49.735246    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:49.735969    1865 main.go:141] libmachine: Using SSH client type: native
	I0920 09:57:49.736273    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.2 22 <nil> <nil>}
	I0920 09:57:49.736281    1865 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 09:57:49.794668    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 09:57:49.794680    1865 main.go:141] libmachine: Detecting the provisioner...
	I0920 09:57:49.794686    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:49.794816    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:49.794912    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:49.794996    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:49.795082    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:49.795209    1865 main.go:141] libmachine: Using SSH client type: native
	I0920 09:57:49.795477    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.2 22 <nil> <nil>}
	I0920 09:57:49.795485    1865 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 09:57:49.855811    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0920 09:57:49.855850    1865 main.go:141] libmachine: found compatible host: buildroot
	I0920 09:57:49.855856    1865 main.go:141] libmachine: Provisioning with buildroot...
	I0920 09:57:49.855862    1865 main.go:141] libmachine: (addons-354000) Calling .GetMachineName
	I0920 09:57:49.855994    1865 buildroot.go:166] provisioning hostname "addons-354000"
	I0920 09:57:49.856006    1865 main.go:141] libmachine: (addons-354000) Calling .GetMachineName
	I0920 09:57:49.856094    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:49.856172    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:49.856255    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:49.856321    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:49.856398    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:49.856515    1865 main.go:141] libmachine: Using SSH client type: native
	I0920 09:57:49.856758    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.2 22 <nil> <nil>}
	I0920 09:57:49.856767    1865 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-354000 && echo "addons-354000" | sudo tee /etc/hostname
	I0920 09:57:49.925276    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-354000
	
	I0920 09:57:49.925294    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:49.925422    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:49.925504    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:49.925594    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:49.925665    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:49.925788    1865 main.go:141] libmachine: Using SSH client type: native
	I0920 09:57:49.926032    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.2 22 <nil> <nil>}
	I0920 09:57:49.926044    1865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-354000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-354000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-354000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 09:57:49.992553    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 09:57:49.992581    1865 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15927-1321/.minikube CaCertPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15927-1321/.minikube}
	I0920 09:57:49.992605    1865 buildroot.go:174] setting up certificates
	I0920 09:57:49.992613    1865 provision.go:83] configureAuth start
	I0920 09:57:49.992622    1865 main.go:141] libmachine: (addons-354000) Calling .GetMachineName
	I0920 09:57:49.992769    1865 main.go:141] libmachine: (addons-354000) Calling .GetIP
	I0920 09:57:49.992872    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:49.992958    1865 provision.go:138] copyHostCerts
	I0920 09:57:49.993058    1865 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15927-1321/.minikube/key.pem (1675 bytes)
	I0920 09:57:49.993359    1865 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15927-1321/.minikube/ca.pem (1082 bytes)
	I0920 09:57:49.993528    1865 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15927-1321/.minikube/cert.pem (1123 bytes)
	I0920 09:57:49.993652    1865 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca-key.pem org=jenkins.addons-354000 san=[192.168.64.2 192.168.64.2 localhost 127.0.0.1 minikube addons-354000]
	I0920 09:57:50.104288    1865 provision.go:172] copyRemoteCerts
	I0920 09:57:50.104342    1865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 09:57:50.104358    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:50.104479    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:50.104564    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.104655    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:50.104758    1865 sshutil.go:53] new ssh client: &{IP:192.168.64.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/id_rsa Username:docker}
	I0920 09:57:50.140790    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0920 09:57:50.157009    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 09:57:50.172880    1865 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 09:57:50.188755    1865 provision.go:86] duration metric: configureAuth took 196.126283ms
	I0920 09:57:50.188768    1865 buildroot.go:189] setting minikube options for container-runtime
	I0920 09:57:50.188900    1865 config.go:182] Loaded profile config "addons-354000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0920 09:57:50.188917    1865 main.go:141] libmachine: (addons-354000) Calling .DriverName
	I0920 09:57:50.189056    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:50.189134    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:50.189218    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.189302    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.189387    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:50.189491    1865 main.go:141] libmachine: Using SSH client type: native
	I0920 09:57:50.189715    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.2 22 <nil> <nil>}
	I0920 09:57:50.189724    1865 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 09:57:50.251154    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 09:57:50.251165    1865 buildroot.go:70] root file system type: tmpfs
	I0920 09:57:50.251227    1865 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 09:57:50.251238    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:50.251361    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:50.251452    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.251537    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.251624    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:50.251744    1865 main.go:141] libmachine: Using SSH client type: native
	I0920 09:57:50.251976    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.2 22 <nil> <nil>}
	I0920 09:57:50.252028    1865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 09:57:50.323542    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 09:57:50.323562    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:50.323687    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:50.323773    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.323869    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.323947    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:50.324077    1865 main.go:141] libmachine: Using SSH client type: native
	I0920 09:57:50.324316    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.2 22 <nil> <nil>}
	I0920 09:57:50.324329    1865 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 09:57:50.803631    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 09:57:50.803654    1865 main.go:141] libmachine: Checking connection to Docker...
	I0920 09:57:50.803671    1865 main.go:141] libmachine: (addons-354000) Calling .GetURL
	I0920 09:57:50.803817    1865 main.go:141] libmachine: Docker is up and running!
	I0920 09:57:50.803825    1865 main.go:141] libmachine: Reticulating splines...
	I0920 09:57:50.803829    1865 client.go:171] LocalClient.Create took 12.345286472s
	I0920 09:57:50.803841    1865 start.go:167] duration metric: libmachine.API.Create for "addons-354000" took 12.345333366s
	I0920 09:57:50.803849    1865 start.go:300] post-start starting for "addons-354000" (driver="hyperkit")
	I0920 09:57:50.803858    1865 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 09:57:50.803871    1865 main.go:141] libmachine: (addons-354000) Calling .DriverName
	I0920 09:57:50.804008    1865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 09:57:50.804023    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:50.804118    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:50.804211    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.804298    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:50.804386    1865 sshutil.go:53] new ssh client: &{IP:192.168.64.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/id_rsa Username:docker}
	I0920 09:57:50.840051    1865 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 09:57:50.842673    1865 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 09:57:50.842686    1865 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15927-1321/.minikube/addons for local assets ...
	I0920 09:57:50.842787    1865 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15927-1321/.minikube/files for local assets ...
	I0920 09:57:50.842832    1865 start.go:303] post-start completed in 38.978291ms
	I0920 09:57:50.842853    1865 main.go:141] libmachine: (addons-354000) Calling .GetConfigRaw
	I0920 09:57:50.843348    1865 main.go:141] libmachine: (addons-354000) Calling .GetIP
	I0920 09:57:50.843483    1865 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/addons-354000/config.json ...
	I0920 09:57:50.843804    1865 start.go:128] duration metric: createHost completed in 12.43809911s
	I0920 09:57:50.843818    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:50.843891    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:50.843978    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.844057    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.844137    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:50.844249    1865 main.go:141] libmachine: Using SSH client type: native
	I0920 09:57:50.844480    1865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.2 22 <nil> <nil>}
	I0920 09:57:50.844487    1865 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 09:57:50.906380    1865 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695229071.003952763
	
	I0920 09:57:50.906391    1865 fix.go:206] guest clock: 1695229071.003952763
	I0920 09:57:50.906397    1865 fix.go:219] Guest: 2023-09-20 09:57:51.003952763 -0700 PDT Remote: 2023-09-20 09:57:50.84381 -0700 PDT m=+12.833126869 (delta=160.142763ms)
	I0920 09:57:50.906418    1865 fix.go:190] guest clock delta is within tolerance: 160.142763ms
	I0920 09:57:50.906422    1865 start.go:83] releasing machines lock for "addons-354000", held for 12.500869308s
	I0920 09:57:50.906440    1865 main.go:141] libmachine: (addons-354000) Calling .DriverName
	I0920 09:57:50.906578    1865 main.go:141] libmachine: (addons-354000) Calling .GetIP
	I0920 09:57:50.906667    1865 main.go:141] libmachine: (addons-354000) Calling .DriverName
	I0920 09:57:50.907009    1865 main.go:141] libmachine: (addons-354000) Calling .DriverName
	I0920 09:57:50.907106    1865 main.go:141] libmachine: (addons-354000) Calling .DriverName
	I0920 09:57:50.907177    1865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 09:57:50.907208    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:50.907230    1865 ssh_runner.go:195] Run: cat /version.json
	I0920 09:57:50.907240    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHHostname
	I0920 09:57:50.907288    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:50.907316    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHPort
	I0920 09:57:50.907400    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.907420    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHKeyPath
	I0920 09:57:50.907524    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:50.907548    1865 main.go:141] libmachine: (addons-354000) Calling .GetSSHUsername
	I0920 09:57:50.907616    1865 sshutil.go:53] new ssh client: &{IP:192.168.64.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/id_rsa Username:docker}
	I0920 09:57:50.907636    1865 sshutil.go:53] new ssh client: &{IP:192.168.64.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/addons-354000/id_rsa Username:docker}
	I0920 09:57:50.940833    1865 ssh_runner.go:195] Run: systemctl --version
	I0920 09:57:50.987886    1865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 09:57:50.992154    1865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 09:57:50.992193    1865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 09:57:51.002633    1865 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 09:57:51.002646    1865 start.go:469] detecting cgroup driver to use...
	I0920 09:57:51.002745    1865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 09:57:51.015842    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0920 09:57:51.022277    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 09:57:51.028638    1865 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 09:57:51.028679    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 09:57:51.035074    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 09:57:51.041715    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 09:57:51.048309    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 09:57:51.054780    1865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 09:57:51.061477    1865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 09:57:51.068035    1865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 09:57:51.074198    1865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 09:57:51.080127    1865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 09:57:51.171775    1865 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 09:57:51.183935    1865 start.go:469] detecting cgroup driver to use...
	I0920 09:57:51.184009    1865 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 09:57:51.193902    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 09:57:51.203606    1865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 09:57:51.216904    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 09:57:51.225391    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 09:57:51.233717    1865 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 09:57:51.256801    1865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 09:57:51.265694    1865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 09:57:51.277078    1865 ssh_runner.go:195] Run: which cri-dockerd
	I0920 09:57:51.279501    1865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 09:57:51.285159    1865 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 09:57:51.296242    1865 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 09:57:51.377414    1865 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 09:57:51.458493    1865 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 09:57:51.458564    1865 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 09:57:51.469935    1865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 09:57:51.557484    1865 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 09:57:52.834590    1865 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.277082653s)
	I0920 09:57:52.834650    1865 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 09:57:52.917997    1865 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 09:57:53.003345    1865 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 09:57:53.099533    1865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 09:57:53.195585    1865 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 09:57:53.229520    1865 out.go:177] 
	W0920 09:57:53.251465    1865 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0920 09:57:53.251490    1865 out.go:239] * 
	* 
	W0920 09:57:53.252607    1865 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 09:57:53.297351    1865 out.go:177] 

** /stderr **
addons_test.go:90: out/minikube-darwin-amd64 start -p addons-354000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 90
--- FAIL: TestAddons/Setup (15.35s)
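Note: the root cause surfaced above is RUNTIME_ENABLE: `sudo systemctl restart cri-docker.socket` exits 1 inside the guest, and the captured stderr only says to check journalctl. Below is a minimal Go sketch of the follow-up evidence gathering, assuming it is run inside the guest (e.g. via `minikube ssh`); it is illustrative tooling, not minikube's ssh_runner code, and only invokes standard systemd utilities.

// Minimal sketch (not minikube code): collect the detail that
// "Job failed. See \"journalctl -xe\" for details." points at.
// Assumes execution inside the guest VM, e.g. via `minikube ssh`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"sudo", "systemctl", "status", "cri-docker.socket", "--no-pager"},
		{"sudo", "journalctl", "-u", "cri-docker.socket", "--no-pager", "-n", "50"},
		{"sudo", "journalctl", "-u", "cri-docker.service", "--no-pager", "-n", "50"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("$ %v\n%s\n", c, out)
		if err != nil {
			fmt.Printf("(exit: %v)\n", err) // a non-zero exit here is itself evidence
		}
	}
}

The same RUNTIME_ENABLE signature recurs in TestMinikubeProfile below, so one journalctl capture from a single repro would likely explain both failures.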

x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 9 has already been assigned to another step:
Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
Cannot use for:
Deleting "json-output-460000" in hyperkit ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f86a6bf2-ac45-4d54-8c54-1e567423666a
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-460000] minikube v1.31.2 on Darwin 13.5.2",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 4ee0f4a9-9eb5-454b-9ef2-7a596907a5a0
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=15927"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e4d6de94-04dc-47a1-800d-e9d8e4fc5e0e
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: cf967161-a26e-4e5a-8445-198d5680c6be
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 2760ead7-0a47-4e67-8e05-36a6d45b5099
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 12397537-ce88-4a0d-8933-df84c5ed935a
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3ec49e41-c0bf-47ab-a805-e032d5d9f213
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ef6decc2-fb79-40f0-9b58-9f00ce1297f9
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the hyperkit driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f93745c9-0819-4511-8c9a-e66c0f37b20e
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-460000 in cluster json-output-460000",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6fbb5d83-fbb0-40b4-8e61-86711b81955d
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7fbea46e-f3cf-493e-8952-c28473251c21
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-460000\" in hyperkit ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5e8f2089-44fb-4823-ba4e-97fefe7c959f
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: unexpected EOF"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 1dcb4688-79c3-4a4d-afa9-737d943f1dd0
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8a42134e-80a9-4f20-b9bd-a8f6f03fbfe2
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 14299285-dbbb-4ac4-803d-a8fbbaf39a50
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 924c8253-bb2e-49c0-bf2d-5ac9f76a84c3
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: c39cef58-3db3-47ac-b4fd-3f2dd686f5d8
datacontenttype: application/json
Data,
{
"currentstep": "14",
"message": "Configuring RBAC rules ...",
"name": "Configuring RBAC rules",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8312f83c-a330-46ae-8e9e-d27fd171028a
datacontenttype: application/json
Data,
{
"currentstep": "15",
"message": "Configuring bridge CNI (Container Networking Interface) ...",
"name": "Configuring CNI",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7f027716-835a-4696-be08-f80852de812c
datacontenttype: application/json
Data,
{
"message": "Using image gcr.io/k8s-minikube/storage-provisioner:v5"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8f04617c-05fc-4706-9e7b-b57a815af405
datacontenttype: application/json
Data,
{
"currentstep": "17",
"message": "Verifying Kubernetes components...",
"name": "Verifying Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b9cf2213-86bc-433d-b71b-24737a88c151
datacontenttype: application/json
Data,
{
"currentstep": "18",
"message": "Enabled addons: default-storageclass, storage-provisioner",
"name": "Enabling Addons",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 4d0bc223-8a81-42c1-b33a-ce5b6b9671cf
datacontenttype: application/json
Data,
{
"currentstep": "19",
"message": "Done! kubectl is now configured to use \"json-output-460000\" cluster and \"default\" namespace by default",
"name": "Done",
"totalsteps": "19"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
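Note: json_output_test.go:114 enforces that a given "currentstep" value is never reused for a different step message; the dump shows step "9" ("Creating VM") being reused for the Deleting/retry events after the hyperkit create failed with "unexpected EOF". Here is a sketch of that invariant under simplified, assumed types (cloudEvent is a stand-in, not the harness's real struct):

// Sketch of the DistinctCurrentSteps invariant, assuming the events have
// already been parsed out of the CloudEvents dump above.
package main

import "fmt"

type cloudEvent struct {
	CurrentStep string // "currentstep" from the Data payload
	Message     string // "message"
}

// distinct reports the first currentstep value reused by a different message.
func distinct(events []cloudEvent) error {
	seen := map[string]string{} // currentstep -> first message that claimed it
	for _, e := range events {
		if prev, ok := seen[e.CurrentStep]; ok && prev != e.Message {
			return fmt.Errorf("step %s has already been assigned to %q; cannot use for %q",
				e.CurrentStep, prev, e.Message)
		}
		seen[e.CurrentStep] = e.Message
	}
	return nil
}

func main() {
	// The retry sequence from the dump: create, delete, create again, all step "9".
	err := distinct([]cloudEvent{
		{"9", "Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ..."},
		{"9", "Deleting \"json-output-460000\" in hyperkit ..."},
	})
	fmt.Println(err) // mirrors the failure above
}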

x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [event stream omitted: byte-for-byte identical to the DistinctCurrentSteps dump above; "currentstep" repeats "9" across the Creating VM / Deleting "json-output-460000" / Creating VM retry before advancing to "11"]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
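Note: the sibling check at json_output_test.go:144 requires the parsed "currentstep" values to increase monotonically, so the same stalled "9, 9, 9" retry sequence fails it too. A companion sketch under the same simplifying assumptions (strconv handles the string-encoded integers; this is a stand-in for the harness logic, not a copy of it):

// Sketch of the IncreasingCurrentSteps invariant over parsed "currentstep" values.
package main

import (
	"fmt"
	"strconv"
)

func increasing(steps []string) error {
	prev := -1
	for _, s := range steps {
		n, err := strconv.Atoi(s)
		if err != nil {
			return err
		}
		if n <= prev {
			return fmt.Errorf("current step is not in increasing order: %d after %d", n, prev)
		}
		prev = n
	}
	return nil
}

func main() {
	// Step values extracted from the dump: the repeated "9" breaks monotonicity.
	fmt.Println(increasing([]string{"0", "1", "3", "9", "9", "9", "11"}))
}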

x
+
TestMinikubeProfile (23.88s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-553000 --driver=hyperkit 
E0920 10:07:26.932775    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p first-553000 --driver=hyperkit : exit status 90 (18.010211338s)

-- stdout --
	* [first-553000] minikube v1.31.2 on Darwin 13.5.2
	  - MINIKUBE_LOCATION=15927
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node first-553000 in cluster first-553000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-amd64 start -p first-553000 --driver=hyperkit ": exit status 90
panic.go:523: *** TestMinikubeProfile FAILED at 2023-09-20 10:07:34.252295 -0700 PDT m=+624.576452953
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p second-554000 -n second-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p second-554000 -n second-554000: exit status 85 (105.123133ms)

-- stdout --
	* Profile "second-554000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-554000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-554000" host is not running, skipping log retrieval (state="* Profile \"second-554000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-554000\"")
helpers_test.go:175: Cleaning up "second-554000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-554000
panic.go:523: *** TestMinikubeProfile FAILED at 2023-09-20 10:07:34.711936 -0700 PDT m=+625.036089710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p first-553000 -n first-553000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p first-553000 -n first-553000: exit status 6 (129.148227ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0920 10:07:34.831108    3231 status.go:415] kubeconfig endpoint: extract IP: "first-553000" does not appear in /Users/jenkins/minikube-integration/15927-1321/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "first-553000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "first-553000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-553000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-553000: (5.28347022s)
--- FAIL: TestMinikubeProfile (23.88s)
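Note: the post-mortem above classifies `minikube status` exit codes by hand: 85 ("profile not found") and 6 (kubeconfig endpoint missing) are both treated as "host is not running, skipping log retrieval (may be ok)". A hedged sketch of that classification follows; the exit-code meanings are read directly off this log, and hostState is an illustrative helper, not harness code.

// Sketch: run `minikube status` for a profile and classify the exit code
// the way the post-mortem above does (85 and 6 appear in this log).
package main

import (
	"fmt"
	"os/exec"
)

func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
	if err == nil {
		return string(out), nil
	}
	if ee, ok := err.(*exec.ExitError); ok {
		switch ee.ExitCode() {
		case 85: // profile not found (seen for second-554000 above)
			return "not found (may be ok)", nil
		case 6: // kubeconfig endpoint mismatch (seen for first-553000 above)
			return "running but kubeconfig stale (may be ok)", nil
		}
	}
	return "", err
}

func main() {
	state, err := hostState("first-553000")
	fmt.Println(state, err)
}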

x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-770000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-770000 "sudo crictl images -o json": exit status 1 (131.347668ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-770000 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
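Note: the decode error ("invalid character '\x1b'") means crictl emitted a colorized FATA log line rather than JSON: per the message above, dockershim on this v1.16.0 cluster does not implement the CRI v1 ImageService, so `sudo crictl images -o json` produced no image list at all. A small sketch that reproduces the decode failure and strips the ANSI escapes to expose the plain-text error (the types and regex here are illustrative, not the harness's code):

// Sketch: why the JSON decode sees '\x1b'. crictl wrote an ANSI-colored
// FATA line instead of JSON; stripping the escapes reveals the message
// but, of course, does not make the output valid JSON.
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

func main() {
	raw := "\x1b[31mFATA\x1b[0m[0000] validate service connection: unknown service runtime.v1.ImageService"
	var images struct {
		Images []struct{ RepoTags []string } `json:"images"`
	}
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		// This is the error the test reports: invalid character '\x1b' ...
		fmt.Println("decode failed:", err)
		fmt.Println("plain text was:", ansi.ReplaceAllString(raw, ""))
	}
}

In other words, every v1.16.0 image in the want/got diff is reported "missing" not because the images are absent but because the listing call itself never succeeded.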
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-770000 logs -n 25
E0920 10:50:19.094242    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-770000 logs -n 25: (2.134448844s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-704000 sudo                                 | kubenet-704000         | jenkins | v1.31.2 | 20 Sep 23 10:40 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-704000 sudo                                 | kubenet-704000         | jenkins | v1.31.2 | 20 Sep 23 10:40 PDT | 20 Sep 23 10:40 PDT |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-704000 sudo find                            | kubenet-704000         | jenkins | v1.31.2 | 20 Sep 23 10:40 PDT | 20 Sep 23 10:40 PDT |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-704000 sudo crio                            | kubenet-704000         | jenkins | v1.31.2 | 20 Sep 23 10:40 PDT | 20 Sep 23 10:40 PDT |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-704000                                      | kubenet-704000         | jenkins | v1.31.2 | 20 Sep 23 10:40 PDT | 20 Sep 23 10:40 PDT |
	| start   | -p no-preload-948000                                   | no-preload-948000      | jenkins | v1.31.2 | 20 Sep 23 10:40 PDT | 20 Sep 23 10:41 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-948000             | no-preload-948000      | jenkins | v1.31.2 | 20 Sep 23 10:41 PDT | 20 Sep 23 10:41 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-948000                                   | no-preload-948000      | jenkins | v1.31.2 | 20 Sep 23 10:41 PDT | 20 Sep 23 10:41 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-770000        | old-k8s-version-770000 | jenkins | v1.31.2 | 20 Sep 23 10:41 PDT | 20 Sep 23 10:41 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-770000                              | old-k8s-version-770000 | jenkins | v1.31.2 | 20 Sep 23 10:41 PDT | 20 Sep 23 10:41 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-948000                  | no-preload-948000      | jenkins | v1.31.2 | 20 Sep 23 10:41 PDT | 20 Sep 23 10:41 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-948000                                   | no-preload-948000      | jenkins | v1.31.2 | 20 Sep 23 10:41 PDT | 20 Sep 23 10:46 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-770000             | old-k8s-version-770000 | jenkins | v1.31.2 | 20 Sep 23 10:41 PDT | 20 Sep 23 10:41 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-770000                              | old-k8s-version-770000 | jenkins | v1.31.2 | 20 Sep 23 10:41 PDT | 20 Sep 23 10:50 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p no-preload-948000 sudo                              | no-preload-948000      | jenkins | v1.31.2 | 20 Sep 23 10:46 PDT | 20 Sep 23 10:46 PDT |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p no-preload-948000                                   | no-preload-948000      | jenkins | v1.31.2 | 20 Sep 23 10:46 PDT | 20 Sep 23 10:46 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-948000                                   | no-preload-948000      | jenkins | v1.31.2 | 20 Sep 23 10:46 PDT | 20 Sep 23 10:46 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-948000                                   | no-preload-948000      | jenkins | v1.31.2 | 20 Sep 23 10:46 PDT | 20 Sep 23 10:47 PDT |
	| delete  | -p no-preload-948000                                   | no-preload-948000      | jenkins | v1.31.2 | 20 Sep 23 10:47 PDT | 20 Sep 23 10:47 PDT |
	| start   | -p embed-certs-564000                                  | embed-certs-564000     | jenkins | v1.31.2 | 20 Sep 23 10:47 PDT | 20 Sep 23 10:47 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --embed-certs                              |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-564000            | embed-certs-564000     | jenkins | v1.31.2 | 20 Sep 23 10:48 PDT | 20 Sep 23 10:48 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-564000                                  | embed-certs-564000     | jenkins | v1.31.2 | 20 Sep 23 10:48 PDT | 20 Sep 23 10:48 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-564000                 | embed-certs-564000     | jenkins | v1.31.2 | 20 Sep 23 10:48 PDT | 20 Sep 23 10:48 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-564000                                  | embed-certs-564000     | jenkins | v1.31.2 | 20 Sep 23 10:48 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --embed-certs                              |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                        |         |         |                     |                     |
	| ssh     | -p old-k8s-version-770000 sudo                         | old-k8s-version-770000 | jenkins | v1.31.2 | 20 Sep 23 10:50 PDT |                     |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/20 10:48:12
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 10:48:12.515223    8649 out.go:296] Setting OutFile to fd 1 ...
	I0920 10:48:12.515520    8649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:12.515525    8649 out.go:309] Setting ErrFile to fd 2...
	I0920 10:48:12.515529    8649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:12.515751    8649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
	I0920 10:48:12.517577    8649 out.go:303] Setting JSON to false
	I0920 10:48:12.538692    8649 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4666,"bootTime":1695227426,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 10:48:12.538802    8649 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0920 10:48:12.561041    8649 out.go:177] * [embed-certs-564000] minikube v1.31.2 on Darwin 13.5.2
	I0920 10:48:12.602824    8649 out.go:177]   - MINIKUBE_LOCATION=15927
	I0920 10:48:12.602950    8649 notify.go:220] Checking for updates...
	I0920 10:48:12.625054    8649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	I0920 10:48:12.646735    8649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 10:48:07.906455    8359 out.go:204]   - Booting up control plane ...
	I0920 10:48:07.906538    8359 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:48:07.906607    8359 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:48:07.906663    8359 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:48:07.906736    8359 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:48:07.906861    8359 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:48:12.709831    8649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:48:12.767914    8649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	I0920 10:48:12.810821    8649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:48:12.833771    8649 config.go:182] Loaded profile config "embed-certs-564000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0920 10:48:12.834444    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:12.834531    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:12.842392    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56129
	I0920 10:48:12.842758    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:12.843188    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:12.843203    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:12.843449    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:12.843558    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:12.843747    8649 driver.go:373] Setting default libvirt URI to qemu:///system
	I0920 10:48:12.843998    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:12.844025    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:12.850790    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56131
	I0920 10:48:12.851146    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:12.851470    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:12.851486    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:12.851718    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:12.851835    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:12.878725    8649 out.go:177] * Using the hyperkit driver based on existing profile
	I0920 10:48:12.921950    8649 start.go:298] selected driver: hyperkit
	I0920 10:48:12.921977    8649 start.go:902] validating driver "hyperkit" against &{Name:embed-certs-564000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-564000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.42 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0920 10:48:12.922180    8649 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:48:12.926130    8649 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:12.926236    8649 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15927-1321/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0920 10:48:12.932980    8649 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I0920 10:48:12.936366    8649 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:12.936384    8649 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0920 10:48:12.936523    8649 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:48:12.936555    8649 cni.go:84] Creating CNI manager for ""
	I0920 10:48:12.936568    8649 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:48:12.936578    8649 start_flags.go:321] config:
	{Name:embed-certs-564000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-564000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.42 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0920 10:48:12.936718    8649 iso.go:125] acquiring lock: {Name:mkeb4366e068e3c3b5036486999179c5031df1bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:12.978833    8649 out.go:177] * Starting control plane node embed-certs-564000 in cluster embed-certs-564000
	I0920 10:48:13.000773    8649 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0920 10:48:13.000878    8649 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0920 10:48:13.000913    8649 cache.go:57] Caching tarball of preloaded images
	I0920 10:48:13.001149    8649 preload.go:174] Found /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0920 10:48:13.001173    8649 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0920 10:48:13.001328    8649 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/embed-certs-564000/config.json ...
	I0920 10:48:13.002147    8649 start.go:365] acquiring machines lock for embed-certs-564000: {Name:mk70bd0945dc485357d15820b17cfe9cfbaaacbd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:48:13.002239    8649 start.go:369] acquired machines lock for "embed-certs-564000" in 69.507µs
	I0920 10:48:13.002275    8649 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:48:13.002291    8649 fix.go:54] fixHost starting: 
	I0920 10:48:13.002718    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:13.002760    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:13.010196    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56133
	I0920 10:48:13.010541    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:13.010975    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:13.010988    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:13.011189    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:13.011299    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:13.011395    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetState
	I0920 10:48:13.011480    8649 main.go:141] libmachine: (embed-certs-564000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:13.011544    8649 main.go:141] libmachine: (embed-certs-564000) DBG | hyperkit pid from json: 8613
	I0920 10:48:13.012436    8649 main.go:141] libmachine: (embed-certs-564000) DBG | hyperkit pid 8613 missing from process table
	I0920 10:48:13.012458    8649 fix.go:102] recreateIfNeeded on embed-certs-564000: state=Stopped err=<nil>
	I0920 10:48:13.012474    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	W0920 10:48:13.012584    8649 fix.go:128] unexpected machine state, will restart: <nil>
	I0920 10:48:13.054842    8649 out.go:177] * Restarting existing hyperkit VM for "embed-certs-564000" ...
	I0920 10:48:13.075955    8649 main.go:141] libmachine: (embed-certs-564000) Calling .Start
	I0920 10:48:13.076266    8649 main.go:141] libmachine: (embed-certs-564000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:13.076363    8649 main.go:141] libmachine: (embed-certs-564000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/hyperkit.pid
	I0920 10:48:13.078121    8649 main.go:141] libmachine: (embed-certs-564000) DBG | hyperkit pid 8613 missing from process table
	I0920 10:48:13.078139    8649 main.go:141] libmachine: (embed-certs-564000) DBG | pid 8613 is in state "Stopped"
	I0920 10:48:13.078157    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/hyperkit.pid...
	I0920 10:48:13.078418    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Using UUID b682d454-57dd-11ee-b1dd-149d997fca88
	I0920 10:48:13.108242    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Generated MAC 56:9b:5e:2c:e:8f
	I0920 10:48:13.108266    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-564000
	I0920 10:48:13.108422    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b682d454-57dd-11ee-b1dd-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003f3b30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0920 10:48:13.108473    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b682d454-57dd-11ee-b1dd-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003f3b30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0920 10:48:13.108517    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b682d454-57dd-11ee-b1dd-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/embed-certs-564000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/tty,log=/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/bzimage,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-564000"}
	I0920 10:48:13.108578    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b682d454-57dd-11ee-b1dd-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/embed-certs-564000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/tty,log=/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/console-ring -f kexec,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/bzimage,/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-564000"
	I0920 10:48:13.108595    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0920 10:48:13.109770    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 DEBUG: hyperkit: Pid is 8660
	I0920 10:48:13.110211    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Attempt 0
	I0920 10:48:13.110221    8649 main.go:141] libmachine: (embed-certs-564000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:13.110291    8649 main.go:141] libmachine: (embed-certs-564000) DBG | hyperkit pid from json: 8660
	I0920 10:48:13.111838    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Searching for 56:9b:5e:2c:e:8f in /var/db/dhcpd_leases ...
	I0920 10:48:13.112005    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Found 41 entries in /var/db/dhcpd_leases!
	I0920 10:48:13.112021    8649 main.go:141] libmachine: (embed-certs-564000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.42 HWAddress:56:9b:5e:2c:e:8f ID:1,56:9b:5e:2c:e:8f Lease:0x650c81a2}
	I0920 10:48:13.112030    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Found match: 56:9b:5e:2c:e:8f
	I0920 10:48:13.112039    8649 main.go:141] libmachine: (embed-certs-564000) DBG | IP: 192.168.64.42
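
The lease scan above is how the hyperkit driver resolves the VM's IP: it matches the VM's generated MAC against macOS's /var/db/dhcpd_leases. A minimal Go sketch of that lookup follows, assuming the bootpd lease format implied by the dhcp-entry line (ip_address preceding hw_address within each lease block); the helper name is illustrative, not minikube's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// lookupIPByMAC scans bootpd's lease file for a block whose hw_address line
// ends with the given MAC, and returns the ip_address seen earlier in that
// same block.
func lookupIPByMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// Lease lines look like "hw_address=1,56:9b:5e:2c:e:8f";
			// the "1," prefix is the hardware type.
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := lookupIPByMAC("/var/db/dhcpd_leases", "56:9b:5e:2c:e:8f")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // 192.168.64.42 in the run above
}
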
	I0920 10:48:13.112087    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetConfigRaw
	I0920 10:48:13.112706    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetIP
	I0920 10:48:13.112885    8649 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/embed-certs-564000/config.json ...
	I0920 10:48:13.113183    8649 machine.go:88] provisioning docker machine ...
	I0920 10:48:13.113194    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:13.113310    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetMachineName
	I0920 10:48:13.113430    8649 buildroot.go:166] provisioning hostname "embed-certs-564000"
	I0920 10:48:13.113441    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetMachineName
	I0920 10:48:13.113554    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:13.113635    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:13.113734    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:13.113825    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:13.113925    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:13.114046    8649 main.go:141] libmachine: Using SSH client type: native
	I0920 10:48:13.114355    8649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0920 10:48:13.114365    8649 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-564000 && echo "embed-certs-564000" | sudo tee /etc/hostname
	I0920 10:48:13.116472    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0920 10:48:13.123697    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0920 10:48:13.124437    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0920 10:48:13.124456    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0920 10:48:13.124470    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0920 10:48:13.124485    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0920 10:48:13.491387    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0920 10:48:13.491403    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0920 10:48:13.595585    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0920 10:48:13.595601    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0920 10:48:13.595614    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0920 10:48:13.595624    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0920 10:48:13.596538    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0920 10:48:13.596551    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0920 10:48:18.486085    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0920 10:48:18.486108    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0920 10:48:18.486120    8649 main.go:141] libmachine: (embed-certs-564000) DBG | 2023/09/20 10:48:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0920 10:48:26.318852    8649 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-564000
	
	I0920 10:48:26.318872    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:26.319049    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:26.319160    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:26.319269    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:26.319367    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:26.319502    8649 main.go:141] libmachine: Using SSH client type: native
	I0920 10:48:26.319758    8649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0920 10:48:26.319770    8649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-564000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-564000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-564000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:48:26.400394    8649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:48:26.400412    8649 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15927-1321/.minikube CaCertPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15927-1321/.minikube}
	I0920 10:48:26.400430    8649 buildroot.go:174] setting up certificates
	I0920 10:48:26.400444    8649 provision.go:83] configureAuth start
	I0920 10:48:26.400452    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetMachineName
	I0920 10:48:26.400582    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetIP
	I0920 10:48:26.400693    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:26.400782    8649 provision.go:138] copyHostCerts
	I0920 10:48:26.400865    8649 exec_runner.go:144] found /Users/jenkins/minikube-integration/15927-1321/.minikube/ca.pem, removing ...
	I0920 10:48:26.400875    8649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15927-1321/.minikube/ca.pem
	I0920 10:48:26.401009    8649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15927-1321/.minikube/ca.pem (1082 bytes)
	I0920 10:48:26.401269    8649 exec_runner.go:144] found /Users/jenkins/minikube-integration/15927-1321/.minikube/cert.pem, removing ...
	I0920 10:48:26.401280    8649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15927-1321/.minikube/cert.pem
	I0920 10:48:26.401465    8649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15927-1321/.minikube/cert.pem (1123 bytes)
	I0920 10:48:26.401646    8649 exec_runner.go:144] found /Users/jenkins/minikube-integration/15927-1321/.minikube/key.pem, removing ...
	I0920 10:48:26.401653    8649 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15927-1321/.minikube/key.pem
	I0920 10:48:26.401722    8649 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15927-1321/.minikube/key.pem (1675 bytes)
	I0920 10:48:26.401859    8649 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca-key.pem org=jenkins.embed-certs-564000 san=[192.168.64.42 192.168.64.42 localhost 127.0.0.1 minikube embed-certs-564000]
	I0920 10:48:26.557453    8649 provision.go:172] copyRemoteCerts
	I0920 10:48:26.557518    8649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:48:26.557537    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:26.557685    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:26.557803    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:26.557898    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:26.557985    8649 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/id_rsa Username:docker}
	I0920 10:48:26.602684    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 10:48:26.618819    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0920 10:48:26.634890    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 10:48:26.650453    8649 provision.go:86] duration metric: configureAuth took 249.989635ms
	I0920 10:48:26.650468    8649 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:48:26.650642    8649 config.go:182] Loaded profile config "embed-certs-564000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0920 10:48:26.650656    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:26.650795    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:26.650878    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:26.650966    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:26.651042    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:26.651121    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:26.651226    8649 main.go:141] libmachine: Using SSH client type: native
	I0920 10:48:26.651466    8649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0920 10:48:26.651477    8649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:48:26.729182    8649 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:48:26.729195    8649 buildroot.go:70] root file system type: tmpfs
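
The "tmpfs" result above comes from probing the guest's root filesystem type over SSH; the next step, rewriting docker.service, follows from that answer. A sketch of the probe itself, run directly on the guest with the SSH plumbing omitted:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType runs the same pipeline the provisioner sends over SSH and
// returns the filesystem type of "/".
func rootFSType() (string, error) {
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	t, err := rootFSType()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("root filesystem type:", t) // "tmpfs" on the buildroot guest above
}
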
	I0920 10:48:26.729280    8649 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:48:26.729294    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:26.729424    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:26.729524    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:26.729624    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:26.729716    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:26.729851    8649 main.go:141] libmachine: Using SSH client type: native
	I0920 10:48:26.730096    8649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0920 10:48:26.730144    8649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:48:26.812978    8649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:48:26.813001    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:26.813151    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:26.813252    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:26.813335    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:26.813427    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:26.813558    8649 main.go:141] libmachine: Using SSH client type: native
	I0920 10:48:26.813802    8649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0920 10:48:26.813815    8649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:48:27.403426    8649 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 10:48:27.403453    8649 machine.go:91] provisioned docker machine in 14.289977947s
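
The diff-or-replace one-liner above makes the unit update idempotent: the rendered docker.service.new replaces the installed unit (followed by daemon-reload, enable, and restart) only when the two differ. In this run diff failed simply because no unit existed yet, so the new file was moved into place and the service enabled. A local sketch of the same pattern, without the SSH/sudo plumbing from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit writes the rendered unit only when it differs from what is
// installed, then reloads systemd and (re)starts the service.
func installUnit(rendered []byte, path string) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the daemon-reload/restart entirely
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	_ = installUnit([]byte("[Unit]\n..."), "/lib/systemd/system/docker.service")
}
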
	I0920 10:48:27.403463    8649 start.go:300] post-start starting for "embed-certs-564000" (driver="hyperkit")
	I0920 10:48:27.403474    8649 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:48:27.403490    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:27.403684    8649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:48:27.403699    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:27.403799    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:27.403943    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:27.404037    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:27.404120    8649 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/id_rsa Username:docker}
	I0920 10:48:27.449966    8649 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:48:27.452671    8649 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 10:48:27.452686    8649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15927-1321/.minikube/addons for local assets ...
	I0920 10:48:27.452774    8649 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15927-1321/.minikube/files for local assets ...
	I0920 10:48:27.452945    8649 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15927-1321/.minikube/files/etc/ssl/certs/17842.pem -> 17842.pem in /etc/ssl/certs
	I0920 10:48:27.453137    8649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 10:48:27.459507    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/files/etc/ssl/certs/17842.pem --> /etc/ssl/certs/17842.pem (1708 bytes)
	I0920 10:48:27.475184    8649 start.go:303] post-start completed in 71.709959ms
	I0920 10:48:27.475202    8649 fix.go:56] fixHost completed within 14.472639375s
	I0920 10:48:27.475217    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:27.475353    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:27.475440    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:27.475527    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:27.475620    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:27.475731    8649 main.go:141] libmachine: Using SSH client type: native
	I0920 10:48:27.475973    8649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0920 10:48:27.475981    8649 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0920 10:48:27.554477    8649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695232107.038704302
	
	I0920 10:48:27.554487    8649 fix.go:206] guest clock: 1695232107.038704302
	I0920 10:48:27.554492    8649 fix.go:219] Guest: 2023-09-20 10:48:27.038704302 -0700 PDT Remote: 2023-09-20 10:48:27.475206 -0700 PDT m=+14.991904999 (delta=-436.501698ms)
	I0920 10:48:27.554512    8649 fix.go:190] guest clock delta is within tolerance: -436.501698ms
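
The guest-clock check parses the output of `date +%s.%N` on the guest and compares it with the host clock; the -436ms delta above was within tolerance, so no clock resync was forced. A sketch of that computation in Go (the 2s tolerance below is illustrative, not minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "<epoch-seconds>.<nanoseconds>" as printed by
// `date +%s.%N` and returns guest time minus host time.
func clockDelta(guestOut string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// %N is zero-padded to 9 digits, so the integer value is nanoseconds.
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	guest := time.Unix(sec, nsec)
	return guest.Sub(time.Now()), nil // negative means the guest is behind the host
}

func main() {
	d, _ := clockDelta("1695232107.038704302")
	fmt.Printf("delta=%v within tolerance: %v\n", d, d > -2*time.Second && d < 2*time.Second)
}
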
	I0920 10:48:27.554515    8649 start.go:83] releasing machines lock for "embed-certs-564000", held for 14.551989687s
	I0920 10:48:27.554535    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:27.554669    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetIP
	I0920 10:48:27.554771    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:27.555104    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:27.555210    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:27.555302    8649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:48:27.555334    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:27.555376    8649 ssh_runner.go:195] Run: cat /version.json
	I0920 10:48:27.555388    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:27.555429    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:27.555488    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:27.555532    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:27.555608    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:27.555627    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:27.555688    8649 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/id_rsa Username:docker}
	I0920 10:48:27.555742    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:27.555820    8649 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/id_rsa Username:docker}
	I0920 10:48:27.595912    8649 ssh_runner.go:195] Run: systemctl --version
	I0920 10:48:27.641898    8649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:48:27.645495    8649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:48:27.645552    8649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 10:48:27.655434    8649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 10:48:27.655453    8649 start.go:469] detecting cgroup driver to use...
	I0920 10:48:27.655564    8649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:48:27.669242    8649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0920 10:48:27.676939    8649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:48:27.684471    8649 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:48:27.684541    8649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:48:27.692101    8649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:48:27.699892    8649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:48:27.707351    8649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:48:27.715125    8649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:48:27.722922    8649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:48:27.730604    8649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:48:27.737595    8649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:48:27.744514    8649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:48:27.836424    8649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 10:48:27.848312    8649 start.go:469] detecting cgroup driver to use...
	I0920 10:48:27.848385    8649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:48:27.857947    8649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:48:27.867957    8649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:48:27.880911    8649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:48:27.889882    8649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:48:27.898459    8649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 10:48:27.919551    8649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:48:27.929381    8649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:48:27.941647    8649 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:48:27.944206    8649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:48:27.951143    8649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 10:48:27.962524    8649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:48:28.046906    8649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:48:28.135010    8649 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:48:28.135104    8649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 10:48:28.147490    8649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:48:28.233423    8649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:48:29.531859    8649 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.29839146s)
	I0920 10:48:29.531922    8649 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:48:29.626279    8649 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:48:29.712314    8649 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:48:29.810293    8649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:48:29.904344    8649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:48:29.919579    8649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:48:30.020646    8649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0920 10:48:30.078537    8649 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:48:30.078641    8649 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:48:30.082449    8649 start.go:537] Will wait 60s for crictl version
	I0920 10:48:30.082520    8649 ssh_runner.go:195] Run: which crictl
	I0920 10:48:30.085251    8649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:48:30.121336    8649 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0920 10:48:30.121419    8649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:48:30.139685    8649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:48:30.199044    8649 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0920 10:48:30.199079    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetIP
	I0920 10:48:30.199300    8649 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0920 10:48:30.201954    8649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:48:30.210678    8649 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0920 10:48:30.210742    8649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:48:30.224592    8649 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0920 10:48:30.224613    8649 docker.go:594] Images already preloaded, skipping extraction
	I0920 10:48:30.224684    8649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:48:30.237847    8649 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0920 10:48:30.237869    8649 cache_images.go:84] Images are preloaded, skipping loading
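
The "Images are preloaded, skipping loading" decision is a set comparison: the output of `docker images --format {{.Repository}}:{{.Tag}}` is checked against the expected preload list, and extraction runs only when something is missing (the ordering difference between the two listings above is irrelevant). A minimal sketch with illustrative data:

package main

import "fmt"

// missingImages returns every expected image absent from the runtime listing.
func missingImages(expected, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, g := range got {
		have[g] = true
	}
	var missing []string
	for _, e := range expected {
		if !have[e] {
			missing = append(missing, e)
		}
	}
	return missing
}

func main() {
	expected := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.9-0"}
	got := []string{"registry.k8s.io/pause:3.9"}
	fmt.Println(missingImages(expected, got)) // [registry.k8s.io/etcd:3.5.9-0]
}
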
	I0920 10:48:30.237943    8649 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:48:30.255353    8649 cni.go:84] Creating CNI manager for ""
	I0920 10:48:30.255368    8649 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:48:30.255384    8649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0920 10:48:30.255401    8649 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.42 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-564000 NodeName:embed-certs-564000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:48:30.255505    8649 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-564000"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.42"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 10:48:30.255566    8649 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-564000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-564000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
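
The kubelet drop-in above is assembled from a handful of per-node values: binary version, hostname override, node IP, and CRI socket. A sketch of building that ExecStart line; the struct below is illustrative, not minikube's actual config type.

package main

import (
	"fmt"
	"strings"
)

type nodeConfig struct {
	Version, Hostname, NodeIP, CRISocket string
}

// kubeletFlags renders the ExecStart command shown in the unit above.
func kubeletFlags(n nodeConfig) string {
	args := []string{
		"/var/lib/minikube/binaries/" + n.Version + "/kubelet",
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime-endpoint=" + n.CRISocket,
		"--hostname-override=" + n.Hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + n.NodeIP,
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(kubeletFlags(nodeConfig{
		Version: "v1.28.2", Hostname: "embed-certs-564000",
		NodeIP: "192.168.64.42", CRISocket: "unix:///var/run/cri-dockerd.sock",
	}))
}
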
	I0920 10:48:30.255631    8649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0920 10:48:30.261491    8649 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:48:30.261551    8649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:48:30.267467    8649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0920 10:48:30.278916    8649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:48:30.290475    8649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0920 10:48:30.302079    8649 ssh_runner.go:195] Run: grep 192.168.64.42	control-plane.minikube.internal$ /etc/hosts
	I0920 10:48:30.304588    8649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.42	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:48:30.313037    8649 certs.go:56] Setting up /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/embed-certs-564000 for IP: 192.168.64.42
	I0920 10:48:30.313055    8649 certs.go:190] acquiring lock for shared ca certs: {Name:mk0d531b1b829cf21f5e54b0ec2e657031b14909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:48:30.313226    8649 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15927-1321/.minikube/ca.key
	I0920 10:48:30.313285    8649 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15927-1321/.minikube/proxy-client-ca.key
	I0920 10:48:30.313386    8649 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/embed-certs-564000/client.key
	I0920 10:48:30.313458    8649 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/embed-certs-564000/apiserver.key.7a4a9a43
	I0920 10:48:30.313519    8649 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/embed-certs-564000/proxy-client.key
	I0920 10:48:30.313745    8649 certs.go:437] found cert: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/1784.pem (1338 bytes)
	W0920 10:48:30.313785    8649 certs.go:433] ignoring /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/1784_empty.pem, impossibly tiny 0 bytes
	I0920 10:48:30.313796    8649 certs.go:437] found cert: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 10:48:30.313827    8649 certs.go:437] found cert: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/ca.pem (1082 bytes)
	I0920 10:48:30.313861    8649 certs.go:437] found cert: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:48:30.313892    8649 certs.go:437] found cert: /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/Users/jenkins/minikube-integration/15927-1321/.minikube/certs/key.pem (1675 bytes)
	I0920 10:48:30.313958    8649 certs.go:437] found cert: /Users/jenkins/minikube-integration/15927-1321/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15927-1321/.minikube/files/etc/ssl/certs/17842.pem (1708 bytes)
	I0920 10:48:30.314455    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/embed-certs-564000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0920 10:48:30.331428    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/embed-certs-564000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 10:48:30.348299    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/embed-certs-564000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:48:30.365300    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/embed-certs-564000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 10:48:30.382177    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:48:30.398999    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 10:48:30.415903    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:48:30.432766    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 10:48:30.450116    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/certs/1784.pem --> /usr/share/ca-certificates/1784.pem (1338 bytes)
	I0920 10:48:30.466416    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/files/etc/ssl/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1708 bytes)
	I0920 10:48:30.483165    8649 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15927-1321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:48:30.500083    8649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:48:30.511962    8649 ssh_runner.go:195] Run: openssl version
	I0920 10:48:30.515718    8649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:48:30.522670    8649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:48:30.525751    8649 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:48:30.525808    8649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:48:30.529566    8649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 10:48:30.536131    8649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1784.pem && ln -fs /usr/share/ca-certificates/1784.pem /etc/ssl/certs/1784.pem"
	I0920 10:48:30.542852    8649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1784.pem
	I0920 10:48:30.545918    8649 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 20 16:58 /usr/share/ca-certificates/1784.pem
	I0920 10:48:30.545968    8649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1784.pem
	I0920 10:48:30.549668    8649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1784.pem /etc/ssl/certs/51391683.0"
	I0920 10:48:30.556340    8649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I0920 10:48:30.563243    8649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I0920 10:48:30.566299    8649 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 20 16:58 /usr/share/ca-certificates/17842.pem
	I0920 10:48:30.566352    8649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I0920 10:48:30.570119    8649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/3ec20f2e.0"
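
The repeated `test -s && ln -fs`, `openssl x509 -hash`, and `test -L || ln -fs` sequences above install each CA PEM under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trust anchors. A sketch of the hash-and-link step; unlike the log's `test -L ||` guard, this version force-replaces the link.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the "<subject_hash>.0" symlink OpenSSL expects
// for a CA certificate.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any existing link, mirroring `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
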
	I0920 10:48:30.576960    8649 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0920 10:48:30.579795    8649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:48:30.583634    8649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:48:30.587393    8649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:48:30.591195    8649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:48:30.594898    8649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:48:30.598621    8649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 10:48:30.602396    8649 kubeadm.go:404] StartCluster: {Name:embed-certs-564000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-564000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.42 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0920 10:48:30.602507    8649 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:48:30.615505    8649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:48:30.621354    8649 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0920 10:48:30.621386    8649 kubeadm.go:636] restartCluster start
	I0920 10:48:30.621438    8649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:48:30.627029    8649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:30.627420    8649 kubeconfig.go:135] verify returned: extract IP: "embed-certs-564000" does not appear in /Users/jenkins/minikube-integration/15927-1321/kubeconfig
	I0920 10:48:30.627556    8649 kubeconfig.go:146] "embed-certs-564000" context is missing from /Users/jenkins/minikube-integration/15927-1321/kubeconfig - will repair!
	I0920 10:48:30.627803    8649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15927-1321/kubeconfig: {Name:mkdf15cc782c7a0a37c7ef1acbc1efbe940392fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:48:30.629150    8649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:48:30.635040    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:30.635099    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:30.643167    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:30.643178    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:30.643236    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:30.650841    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:31.151579    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:31.151666    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:31.159395    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:31.651479    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:31.651554    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:31.659617    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:32.151792    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:32.151856    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:32.159776    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
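
The repeated "Checking apiserver status" / "stopped: unable to get apiserver pid" pairs above (and below, interleaved with the parallel old-k8s-version run) are a poll loop: pgrep for the kube-apiserver process roughly every 500ms until it appears or a deadline passes; it keeps failing here because the cluster is mid-restart. A minimal sketch of that loop; the 30s timeout is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until the apiserver process shows up or the
// timeout elapses.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same match the log uses: exact, newest, full-command-line pgrep.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return nil // apiserver process found
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(30 * time.Second))
}
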
	I0920 10:48:35.904079    8359 kubeadm.go:322] [apiclient] All control plane components are healthy after 28.003218 seconds
	I0920 10:48:35.904233    8359 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:48:35.913474    8359 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:48:36.429594    8359 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:48:36.429718    8359 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-770000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0920 10:48:36.941844    8359 kubeadm.go:322] [bootstrap-token] Using token: agf6no.q3qqne68c9h3ui0g
	I0920 10:48:32.652947    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:32.653046    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:32.662247    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:33.151180    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:33.151274    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:33.159276    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:33.651763    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:33.651873    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:33.660169    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:34.151755    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:34.151867    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:34.161330    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:34.651021    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:34.651132    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:34.660737    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:35.152393    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:35.152577    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:35.161893    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:35.651359    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:35.651442    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:35.659891    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:36.153064    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:36.153245    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:36.162676    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:36.651114    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:36.651215    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:36.659579    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:37.151741    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:37.151885    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:37.161005    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:37.002062    8359 out.go:204]   - Configuring RBAC rules ...
	I0920 10:48:37.002325    8359 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:48:37.006469    8359 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:48:37.009750    8359 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:48:37.012297    8359 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:48:37.014000    8359 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:48:37.052235    8359 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0920 10:48:37.353579    8359 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0920 10:48:37.354853    8359 kubeadm.go:322] 
	I0920 10:48:37.354908    8359 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0920 10:48:37.354920    8359 kubeadm.go:322] 
	I0920 10:48:37.354991    8359 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0920 10:48:37.355000    8359 kubeadm.go:322] 
	I0920 10:48:37.355022    8359 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0920 10:48:37.355108    8359 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:48:37.355150    8359 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:48:37.355160    8359 kubeadm.go:322] 
	I0920 10:48:37.355203    8359 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0920 10:48:37.355278    8359 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:48:37.355365    8359 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:48:37.355384    8359 kubeadm.go:322] 
	I0920 10:48:37.355465    8359 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0920 10:48:37.355543    8359 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0920 10:48:37.355550    8359 kubeadm.go:322] 
	I0920 10:48:37.355612    8359 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token agf6no.q3qqne68c9h3ui0g \
	I0920 10:48:37.355707    8359 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8c46996a4486c32414f0b67abc9ab1aadba7389bc4a644adbad1798b480c328f \
	I0920 10:48:37.355733    8359 kubeadm.go:322]     --control-plane 	  
	I0920 10:48:37.355741    8359 kubeadm.go:322] 
	I0920 10:48:37.355816    8359 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:48:37.355826    8359 kubeadm.go:322] 
	I0920 10:48:37.355891    8359 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token agf6no.q3qqne68c9h3ui0g \
	I0920 10:48:37.355964    8359 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8c46996a4486c32414f0b67abc9ab1aadba7389bc4a644adbad1798b480c328f 
	I0920 10:48:37.356649    8359 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0920 10:48:37.356760    8359 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I0920 10:48:37.356832    8359 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 10:48:37.356842    8359 cni.go:84] Creating CNI manager for ""
	I0920 10:48:37.356852    8359 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:48:37.356866    8359 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:48:37.356925    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:37.356925    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=df65c8f75cd36776317ad1fc65dba4e994a4b8ca minikube.k8s.io/name=old-k8s-version-770000 minikube.k8s.io/updated_at=2023_09_20T10_48_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:37.364694    8359 ops.go:34] apiserver oom_adj: -16
	I0920 10:48:37.481506    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:37.556298    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:37.652004    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:37.652077    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:37.660456    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:38.152759    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:38.152838    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:38.160993    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:38.652398    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:38.652468    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:38.660483    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:39.152294    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:39.152386    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:39.160199    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:39.651199    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:39.651288    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:39.659323    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:40.152282    8649 api_server.go:166] Checking apiserver status ...
	I0920 10:48:40.152365    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 10:48:40.160211    8649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:48:40.636448    8649 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0920 10:48:40.636520    8649 kubeadm.go:1128] stopping kube-system containers ...
	I0920 10:48:40.636598    8649 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:48:40.650768    8649 docker.go:463] Stopping containers: [f6f8beb5c8ce 74f49a72e721 45e4c9ad67a4 297ad0521c7d ad0927bdf2dc eba80d581d64 162a73c26b5a 6d7983612f19 eac7b1173474 0c4377c6883d 1cf5510945f8 496f7116c862 c7f3ce16c400 5f239d4efa4c 55d72ca5cd5e]
	I0920 10:48:40.650844    8649 ssh_runner.go:195] Run: docker stop f6f8beb5c8ce 74f49a72e721 45e4c9ad67a4 297ad0521c7d ad0927bdf2dc eba80d581d64 162a73c26b5a 6d7983612f19 eac7b1173474 0c4377c6883d 1cf5510945f8 496f7116c862 c7f3ce16c400 5f239d4efa4c 55d72ca5cd5e
	I0920 10:48:40.665935    8649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:48:40.676772    8649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:48:40.682767    8649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:48:40.682826    8649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:48:40.688781    8649 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0920 10:48:40.688792    8649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:48:40.762032    8649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:48:41.571273    8649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:48:41.717461    8649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:48:41.760803    8649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
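[Note: because the config check above failed (none of the /etc/kubernetes/*.conf files exist), minikube reconfigures the cluster by running kubeadm's individual init phases rather than a full `kubeadm init`. The order is visible in the log: certs, kubeconfig, kubelet-start, control-plane, etcd. A hedged sketch of driving those phases from Go follows; the binary path is copied from the log's PATH prefix, and the error handling is simplified for illustration.]

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadmBin := "/var/lib/minikube/binaries/v1.28.2/kubeadm" // assumed path, per the log's PATH prefix
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, args := range phases {
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command(kubeadmBin, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", args, err)
			return
		}
	}
}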
	I0920 10:48:41.801784    8649 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:48:41.801853    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:48:41.812778    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:48:42.324002    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:48:38.121083    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:38.622436    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:39.121160    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:39.621469    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:40.122396    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:40.621935    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:41.121504    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:41.622052    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:42.123100    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:42.621427    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:42.824202    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:48:43.324380    8649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:48:43.338795    8649 api_server.go:72] duration metric: took 1.536980401s to wait for apiserver process to appear ...
	I0920 10:48:43.338808    8649 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:48:43.338829    8649 api_server.go:253] Checking apiserver healthz at https://192.168.64.42:8443/healthz ...
	I0920 10:48:45.950981    8649 api_server.go:279] https://192.168.64.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 10:48:45.950997    8649 api_server.go:103] status: https://192.168.64.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 10:48:45.951006    8649 api_server.go:253] Checking apiserver healthz at https://192.168.64.42:8443/healthz ...
	I0920 10:48:46.002974    8649 api_server.go:279] https://192.168.64.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0920 10:48:46.002994    8649 api_server.go:103] status: https://192.168.64.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0920 10:48:46.504856    8649 api_server.go:253] Checking apiserver healthz at https://192.168.64.42:8443/healthz ...
	I0920 10:48:46.508750    8649 api_server.go:279] https://192.168.64.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0920 10:48:46.508762    8649 api_server.go:103] status: https://192.168.64.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0920 10:48:47.004592    8649 api_server.go:253] Checking apiserver healthz at https://192.168.64.42:8443/healthz ...
	I0920 10:48:47.011265    8649 api_server.go:279] https://192.168.64.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0920 10:48:47.011281    8649 api_server.go:103] status: https://192.168.64.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0920 10:48:47.503086    8649 api_server.go:253] Checking apiserver healthz at https://192.168.64.42:8443/healthz ...
	I0920 10:48:47.506415    8649 api_server.go:279] https://192.168.64.42:8443/healthz returned 200:
	ok
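[Note: the 403 -> 500 -> 200 progression above is the normal readiness sequence for a restarting apiserver. First anonymous requests are rejected outright (403), then /healthz enumerates poststarthooks as they complete (rbac/bootstrap-roles is typically among the last to clear), and finally it returns a plain "ok". A minimal standalone probe with the same semantics might look like the sketch below; certificate verification is skipped because the apiserver cert is self-signed for the VM IP, which is acceptable only for an illustrative probe.]

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Self-signed minikube cert: skip verification for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.64.42:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("not ready (%d), retrying...\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}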
	I0920 10:48:47.511683    8649 api_server.go:141] control plane version: v1.28.2
	I0920 10:48:47.511694    8649 api_server.go:131] duration metric: took 4.172802964s to wait for apiserver health ...
	I0920 10:48:47.511703    8649 cni.go:84] Creating CNI manager for ""
	I0920 10:48:47.511712    8649 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:48:47.535908    8649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:48:43.122012    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:43.622379    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:44.121416    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:44.622430    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:45.121705    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:45.622135    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:46.121055    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:46.622439    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:47.121076    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:47.622858    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:47.555829    8649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:48:47.563332    8649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
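[Note: for Kubernetes v1.24+ on the hyperkit/docker combination, minikube selects the simple bridge CNI and writes a small conflist to /etc/cni/net.d/1-k8s.conflist (457 bytes here). The exact file is not reproduced in the log; the JSON embedded below is an illustrative bridge+portmap chain of the usual shape, with assumed field values, not the byte-for-byte file minikube generated.]

package main

import "os"

// Illustrative bridge CNI conflist; field values are assumptions, not the
// exact 457-byte file scp'd in the log above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Matches the mkdir + scp pair in the log.
	os.MkdirAll("/etc/cni/net.d", 0755)
	os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644)
}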
	I0920 10:48:47.588106    8649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 10:48:47.594831    8649 system_pods.go:59] 8 kube-system pods found
	I0920 10:48:47.594852    8649 system_pods.go:61] "coredns-5dd5756b68-965pn" [21346e4d-73ec-4290-b4c4-7e9559b1a861] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 10:48:47.594858    8649 system_pods.go:61] "etcd-embed-certs-564000" [6bce091c-2da4-45c4-957c-11256ff08e6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 10:48:47.594863    8649 system_pods.go:61] "kube-apiserver-embed-certs-564000" [fd5103bb-4acb-49c2-9e6f-df94e26c0c64] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 10:48:47.594868    8649 system_pods.go:61] "kube-controller-manager-embed-certs-564000" [0a3c2900-1426-4d63-bca3-b832ad6d63fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 10:48:47.594895    8649 system_pods.go:61] "kube-proxy-766r9" [c5fae77a-e5b2-45e8-9621-646d83db6693] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 10:48:47.594901    8649 system_pods.go:61] "kube-scheduler-embed-certs-564000" [ca305257-1307-4973-b423-b7bc5887c05c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 10:48:47.594906    8649 system_pods.go:61] "metrics-server-57f55c9bc5-zrzfp" [873620d8-03ab-45d9-85b8-d7e6581490c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:48:47.594910    8649 system_pods.go:61] "storage-provisioner" [e04a1dea-8745-42b6-ad39-61e7b914de1f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 10:48:47.594919    8649 system_pods.go:74] duration metric: took 6.802957ms to wait for pod list to return data ...
	I0920 10:48:47.594927    8649 node_conditions.go:102] verifying NodePressure condition ...
	I0920 10:48:47.597689    8649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0920 10:48:47.597707    8649 node_conditions.go:123] node cpu capacity is 2
	I0920 10:48:47.597729    8649 node_conditions.go:105] duration metric: took 2.796893ms to run NodePressure ...
	I0920 10:48:47.597741    8649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:48:47.826300    8649 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0920 10:48:47.830548    8649 kubeadm.go:787] kubelet initialised
	I0920 10:48:47.830561    8649 kubeadm.go:788] duration metric: took 4.245811ms waiting for restarted kubelet to initialise ...
	I0920 10:48:47.830568    8649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 10:48:47.835607    8649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-965pn" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:47.843627    8649 pod_ready.go:97] node "embed-certs-564000" hosting pod "coredns-5dd5756b68-965pn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:47.843641    8649 pod_ready.go:81] duration metric: took 8.020145ms waiting for pod "coredns-5dd5756b68-965pn" in "kube-system" namespace to be "Ready" ...
	E0920 10:48:47.843648    8649 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-564000" hosting pod "coredns-5dd5756b68-965pn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:47.843654    8649 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:47.854057    8649 pod_ready.go:97] node "embed-certs-564000" hosting pod "etcd-embed-certs-564000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:47.854069    8649 pod_ready.go:81] duration metric: took 10.410079ms waiting for pod "etcd-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	E0920 10:48:47.854076    8649 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-564000" hosting pod "etcd-embed-certs-564000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:47.854081    8649 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:47.873178    8649 pod_ready.go:97] node "embed-certs-564000" hosting pod "kube-apiserver-embed-certs-564000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:47.873194    8649 pod_ready.go:81] duration metric: took 19.10787ms waiting for pod "kube-apiserver-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	E0920 10:48:47.873201    8649 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-564000" hosting pod "kube-apiserver-embed-certs-564000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:47.873208    8649 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:47.992297    8649 pod_ready.go:97] node "embed-certs-564000" hosting pod "kube-controller-manager-embed-certs-564000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:47.992313    8649 pod_ready.go:81] duration metric: took 119.097701ms waiting for pod "kube-controller-manager-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	E0920 10:48:47.992320    8649 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-564000" hosting pod "kube-controller-manager-embed-certs-564000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:47.992326    8649 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-766r9" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:48.392323    8649 pod_ready.go:97] node "embed-certs-564000" hosting pod "kube-proxy-766r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:48.392335    8649 pod_ready.go:81] duration metric: took 399.997422ms waiting for pod "kube-proxy-766r9" in "kube-system" namespace to be "Ready" ...
	E0920 10:48:48.392341    8649 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-564000" hosting pod "kube-proxy-766r9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:48.392346    8649 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:48.791830    8649 pod_ready.go:97] node "embed-certs-564000" hosting pod "kube-scheduler-embed-certs-564000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:48.791847    8649 pod_ready.go:81] duration metric: took 399.48852ms waiting for pod "kube-scheduler-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	E0920 10:48:48.791856    8649 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-564000" hosting pod "kube-scheduler-embed-certs-564000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:48.791884    8649 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:49.190384    8649 pod_ready.go:97] node "embed-certs-564000" hosting pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:49.190399    8649 pod_ready.go:81] duration metric: took 398.495695ms waiting for pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace to be "Ready" ...
	E0920 10:48:49.190406    8649 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-564000" hosting pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-564000" has status "Ready":"False"
	I0920 10:48:49.190419    8649 pod_ready.go:38] duration metric: took 1.359812078s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
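[Note: every wait in the block above follows the same rule: while the hosting node reports Ready:"False", each per-pod wait is recorded as an error and skipped rather than blocked on, which is why the whole pass over eight pods finishes in about 1.4s. A rough equivalent of the per-pod condition check, shelling out to kubectl instead of using client-go as minikube does, is sketched below; the helper name is invented, and the pod name and namespace are taken from the log.]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// podReady reports whether the named kube-system pod has condition Ready=True.
// Hypothetical helper for illustration only.
func podReady(name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", "-n", "kube-system", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ok, err := podReady("coredns-5dd5756b68-965pn")
	fmt.Println(ok, err)
}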
	I0920 10:48:49.190436    8649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:48:49.198384    8649 ops.go:34] apiserver oom_adj: -16
	I0920 10:48:49.198403    8649 kubeadm.go:640] restartCluster took 18.57665017s
	I0920 10:48:49.198409    8649 kubeadm.go:406] StartCluster complete in 18.595667654s
	I0920 10:48:49.198424    8649 settings.go:142] acquiring lock: {Name:mk81d3f0fa580bbfc06ed1f42a67f5b92eda752c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:48:49.198511    8649 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15927-1321/kubeconfig
	I0920 10:48:49.199252    8649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15927-1321/kubeconfig: {Name:mkdf15cc782c7a0a37c7ef1acbc1efbe940392fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:48:49.199502    8649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 10:48:49.199522    8649 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0920 10:48:49.199558    8649 addons.go:69] Setting default-storageclass=true in profile "embed-certs-564000"
	I0920 10:48:49.199561    8649 addons.go:69] Setting metrics-server=true in profile "embed-certs-564000"
	I0920 10:48:49.199573    8649 addons.go:231] Setting addon metrics-server=true in "embed-certs-564000"
	I0920 10:48:49.199573    8649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-564000"
	I0920 10:48:49.199575    8649 addons.go:69] Setting dashboard=true in profile "embed-certs-564000"
	W0920 10:48:49.199579    8649 addons.go:240] addon metrics-server should already be in state true
	I0920 10:48:49.199558    8649 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-564000"
	I0920 10:48:49.199590    8649 addons.go:231] Setting addon dashboard=true in "embed-certs-564000"
	W0920 10:48:49.199597    8649 addons.go:240] addon dashboard should already be in state true
	I0920 10:48:49.199602    8649 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-564000"
	I0920 10:48:49.199614    8649 host.go:66] Checking if "embed-certs-564000" exists ...
	W0920 10:48:49.199615    8649 addons.go:240] addon storage-provisioner should already be in state true
	I0920 10:48:49.199619    8649 host.go:66] Checking if "embed-certs-564000" exists ...
	I0920 10:48:49.199637    8649 config.go:182] Loaded profile config "embed-certs-564000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0920 10:48:49.199649    8649 host.go:66] Checking if "embed-certs-564000" exists ...
	I0920 10:48:49.199850    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:49.199864    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:49.199875    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:49.199892    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:49.199941    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:49.199960    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:49.199978    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:49.199996    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:49.208258    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56159
	I0920 10:48:49.208654    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56161
	I0920 10:48:49.208689    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:49.209032    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:49.209158    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:49.209170    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:49.209418    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:49.209433    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:49.209450    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:49.209713    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:49.209902    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:49.209921    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:49.210477    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:49.210746    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:49.210848    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56163
	I0920 10:48:49.213321    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:49.213354    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56165
	I0920 10:48:49.213831    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:49.213841    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:49.213890    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:49.214113    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:49.214280    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:49.214288    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:49.214550    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:49.214697    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:49.214718    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:49.214724    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetState
	I0920 10:48:49.214856    8649 main.go:141] libmachine: (embed-certs-564000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:49.215439    8649 main.go:141] libmachine: (embed-certs-564000) DBG | hyperkit pid from json: 8660
	I0920 10:48:49.218865    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56167
	I0920 10:48:49.219139    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56169
	I0920 10:48:49.219339    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:49.219464    8649 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-564000" context rescaled to 1 replicas
	I0920 10:48:49.219489    8649 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.42 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:48:49.219504    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:49.242545    8649 out.go:177] * Verifying Kubernetes components...
	I0920 10:48:49.219793    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:49.222630    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56171
	I0920 10:48:49.223883    8649 addons.go:231] Setting addon default-storageclass=true in "embed-certs-564000"
	W0920 10:48:49.279719    8649 addons.go:240] addon default-storageclass should already be in state true
	I0920 10:48:49.279719    8649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:48:49.219866    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:49.279745    8649 host.go:66] Checking if "embed-certs-564000" exists ...
	I0920 10:48:49.242583    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:49.279760    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:49.280147    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:49.280156    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:49.280174    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:49.280213    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:49.280231    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:49.280387    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetState
	I0920 10:48:49.280397    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetState
	I0920 10:48:49.281181    8649 main.go:141] libmachine: (embed-certs-564000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:49.281375    8649 main.go:141] libmachine: (embed-certs-564000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:49.281424    8649 main.go:141] libmachine: (embed-certs-564000) DBG | hyperkit pid from json: 8660
	I0920 10:48:49.281477    8649 main.go:141] libmachine: (embed-certs-564000) DBG | hyperkit pid from json: 8660
	I0920 10:48:49.281655    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:49.281703    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:49.282483    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:49.282538    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:49.282727    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:49.282777    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetState
	I0920 10:48:49.319588    8649 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 10:48:49.282898    8649 main.go:141] libmachine: (embed-certs-564000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:49.284009    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:49.288434    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56173
	I0920 10:48:49.356660    8649 main.go:141] libmachine: (embed-certs-564000) DBG | hyperkit pid from json: 8660
	I0920 10:48:49.356688    8649 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 10:48:49.364049    8649 node_ready.go:35] waiting up to 6m0s for node "embed-certs-564000" to be "Ready" ...
	I0920 10:48:49.364187    8649 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0920 10:48:49.377732    8649 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0920 10:48:49.377806    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 10:48:49.378478    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:49.398710    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:49.419479    8649 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0920 10:48:49.440708    8649 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:48:49.461919    8649 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0920 10:48:49.461938    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0920 10:48:49.441555    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:49.441025    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:49.461968    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:49.499652    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:49.499660    8649 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:48:49.499750    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:48:49.499761    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:49.499871    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:49.499888    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:49.499952    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:49.500012    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:49.500018    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:49.500077    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:49.500145    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:49.500153    8649 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/id_rsa Username:docker}
	I0920 10:48:49.500176    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:49.500318    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:49.500362    8649 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/id_rsa Username:docker}
	I0920 10:48:49.500431    8649 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/id_rsa Username:docker}
	I0920 10:48:49.500544    8649 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:49.500572    8649 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:49.508427    8649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56178
	I0920 10:48:49.508857    8649 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:49.509272    8649 main.go:141] libmachine: Using API Version  1
	I0920 10:48:49.509284    8649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:49.509556    8649 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:49.509671    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetState
	I0920 10:48:49.509779    8649 main.go:141] libmachine: (embed-certs-564000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:49.509946    8649 main.go:141] libmachine: (embed-certs-564000) DBG | hyperkit pid from json: 8660
	I0920 10:48:49.510986    8649 main.go:141] libmachine: (embed-certs-564000) Calling .DriverName
	I0920 10:48:49.511179    8649 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:48:49.511187    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:48:49.511196    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHHostname
	I0920 10:48:49.511283    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHPort
	I0920 10:48:49.511364    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHKeyPath
	I0920 10:48:49.511463    8649 main.go:141] libmachine: (embed-certs-564000) Calling .GetSSHUsername
	I0920 10:48:49.511556    8649 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/embed-certs-564000/id_rsa Username:docker}
	I0920 10:48:49.562787    8649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:48:49.563423    8649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 10:48:49.563432    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 10:48:49.580471    8649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:48:49.586503    8649 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0920 10:48:49.586515    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0920 10:48:49.592815    8649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 10:48:49.592826    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 10:48:49.606150    8649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 10:48:49.606161    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 10:48:49.626920    8649 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0920 10:48:49.626937    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0920 10:48:49.656870    8649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 10:48:49.677495    8649 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0920 10:48:49.677508    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0920 10:48:49.717000    8649 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0920 10:48:49.717012    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0920 10:48:49.794309    8649 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0920 10:48:49.794321    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0920 10:48:49.860117    8649 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0920 10:48:49.860129    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0920 10:48:49.889694    8649 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0920 10:48:49.889707    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0920 10:48:49.921101    8649 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0920 10:48:49.921114    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0920 10:48:49.932899    8649 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 10:48:49.932910    8649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0920 10:48:49.944453    8649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
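The run above shows minikube's addon-install pattern in miniature: each manifest is streamed from memory straight to /etc/kubernetes/addons/ on the guest over SSH (the "scp memory --> ... (N bytes)" lines), and once a group is in place a single batched "kubectl apply -f" picks them all up, using the version-pinned kubectl under /var/lib/minikube/binaries/. A minimal sketch of that flow, assuming golang.org/x/crypto/ssh with placeholder host, key path, and manifest values; this is an illustration of the pattern, not minikube's actual implementation:

    // addon_push.go: illustrative only. Streams an in-memory manifest to the
    // guest over SSH and applies it with the cluster's pinned kubectl,
    // mirroring the "scp memory --> ..." / "kubectl apply -f ..." pairs above.
    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func pushAndApply(client *ssh.Client, manifest []byte, remotePath string) error {
        // Stream the manifest into the remote file (stand-in for "scp memory").
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        sess.Stdin = bytes.NewReader(manifest)
        if err := sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath)); err != nil {
            sess.Close()
            return err
        }
        sess.Close()

        // Apply it with the version-pinned kubectl, as the log does.
        sess, err = client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        apply := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.28.2/kubectl apply -f " + remotePath
        return sess.Run(apply)
    }

    func main() {
        key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local VM sketch
        }
        client, err := ssh.Dial("tcp", "192.168.64.42:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
        if err := pushAndApply(client, manifest, "/etc/kubernetes/addons/demo.yaml"); err != nil {
            log.Fatal(err)
        }
    }

Streaming through "sudo tee" stands in for the in-memory scp; the point is that no manifest ever touches the host's disk. It goes buffer, SSH session, guest file, apply.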
	I0920 10:48:50.594686    8649 node_ready.go:49] node "embed-certs-564000" has status "Ready":"True"
	I0920 10:48:50.594704    8649 node_ready.go:38] duration metric: took 1.216870079s waiting for node "embed-certs-564000" to be "Ready" ...
	I0920 10:48:50.594711    8649 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 10:48:50.599338    8649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-965pn" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:50.866733    8649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.286219872s)
	I0920 10:48:50.866766    8649 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:50.866786    8649 main.go:141] libmachine: (embed-certs-564000) Calling .Close
	I0920 10:48:50.866866    8649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.304032456s)
	I0920 10:48:50.866888    8649 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:50.866895    8649 main.go:141] libmachine: (embed-certs-564000) Calling .Close
	I0920 10:48:50.866956    8649 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:50.866970    8649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:50.866978    8649 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:50.866984    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Closing plugin on server side
	I0920 10:48:50.866987    8649 main.go:141] libmachine: (embed-certs-564000) Calling .Close
	I0920 10:48:50.867068    8649 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:50.867081    8649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:50.867089    8649 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:50.867099    8649 main.go:141] libmachine: (embed-certs-564000) Calling .Close
	I0920 10:48:50.867103    8649 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:50.867107    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Closing plugin on server side
	I0920 10:48:50.867115    8649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:50.867129    8649 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:50.867142    8649 main.go:141] libmachine: (embed-certs-564000) Calling .Close
	I0920 10:48:50.867278    8649 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:50.867302    8649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:50.867306    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Closing plugin on server side
	I0920 10:48:50.867336    8649 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:50.867342    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Closing plugin on server side
	I0920 10:48:50.867347    8649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:50.868170    8649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.211254495s)
	I0920 10:48:50.868189    8649 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:50.868195    8649 main.go:141] libmachine: (embed-certs-564000) Calling .Close
	I0920 10:48:50.868324    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Closing plugin on server side
	I0920 10:48:50.868344    8649 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:50.868363    8649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:50.868373    8649 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:50.868384    8649 main.go:141] libmachine: (embed-certs-564000) Calling .Close
	I0920 10:48:50.868492    8649 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:50.868506    8649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:50.868512    8649 addons.go:467] Verifying addon metrics-server=true in "embed-certs-564000"
	I0920 10:48:50.868518    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Closing plugin on server side
	I0920 10:48:51.268577    8649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.324069771s)
	I0920 10:48:51.268606    8649 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:51.268613    8649 main.go:141] libmachine: (embed-certs-564000) Calling .Close
	I0920 10:48:51.268814    8649 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:51.268825    8649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:51.268834    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Closing plugin on server side
	I0920 10:48:51.268843    8649 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:51.268850    8649 main.go:141] libmachine: (embed-certs-564000) Calling .Close
	I0920 10:48:51.269009    8649 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:51.269009    8649 main.go:141] libmachine: (embed-certs-564000) DBG | Closing plugin on server side
	I0920 10:48:51.269021    8649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:51.292086    8649 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-564000 addons enable metrics-server
	
	
	I0920 10:48:51.350782    8649 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0920 10:48:51.431542    8649 addons.go:502] enable addons completed in 2.231703762s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
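All of the "Launching plugin server for driver hyperkit" / "Plugin server listening at 127.0.0.1:PORT" / "Making call to close driver server" chatter is the docker-machine plugin model: each driver operation spawns (or reuses) a child process that serves the driver API over RPC on an ephemeral loopback port, and every consumer tears its connection down explicitly, which is why the Close calls arrive in bursts at the end of each apply. A toy sketch of that shape using net/rpc; Driver, Empty, and the method set are invented stand-ins, not the real plugin surface:

    // plugin_rpc.go: a toy version of the driver-plugin pattern in the log.
    // A server goroutine listens on a random loopback port, a client calls a
    // method, then performs the explicit Close round-trip.
    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    // Driver is a stand-in for the hyperkit driver's RPC surface.
    type Driver struct{}

    type Empty struct{}

    func (d *Driver) GetVersion(_ Empty, reply *int) error { *reply = 1; return nil }
    func (d *Driver) Close(_ Empty, _ *Empty) error        { return nil }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            log.Fatal(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0") // like "Plugin server listening at ..."
        if err != nil {
            log.Fatal(err)
        }
        go srv.Accept(ln)

        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            log.Fatal(err)
        }
        var v int
        if err := client.Call("Driver.GetVersion", Empty{}, &v); err != nil {
            log.Fatal(err)
        }
        fmt.Println("Using API Version", v) // mirrors the log line

        // "Making call to close driver server" / "close connection to plugin binary"
        if err := client.Call("Driver.Close", Empty{}, &Empty{}); err != nil {
            log.Fatal(err)
        }
        client.Close()
    }

The steadily climbing ports in the log (56178, 56182, 56184, 56186, ...) are exactly these ephemeral loopback listeners, one per plugin launch.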
	I0920 10:48:48.121476    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:48.621034    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:49.121190    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:49.622120    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:50.121509    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:50.622688    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:51.122476    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:51.621971    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:52.122310    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:52.622071    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:53.122164    8359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:48:53.190122    8359 kubeadm.go:1081] duration metric: took 15.832945423s to wait for elevateKubeSystemPrivileges.
	I0920 10:48:53.190139    8359 kubeadm.go:406] StartCluster complete in 6m25.788440803s
	I0920 10:48:53.190155    8359 settings.go:142] acquiring lock: {Name:mk81d3f0fa580bbfc06ed1f42a67f5b92eda752c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:48:53.190234    8359 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15927-1321/kubeconfig
	I0920 10:48:53.191070    8359 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15927-1321/kubeconfig: {Name:mkdf15cc782c7a0a37c7ef1acbc1efbe940392fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:48:53.191295    8359 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 10:48:53.191302    8359 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0920 10:48:53.191344    8359 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-770000"
	I0920 10:48:53.191352    8359 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-770000"
	I0920 10:48:53.191361    8359 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-770000"
	I0920 10:48:53.191368    8359 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-770000"
	W0920 10:48:53.191369    8359 addons.go:240] addon storage-provisioner should already be in state true
	I0920 10:48:53.191365    8359 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-770000"
	I0920 10:48:53.191382    8359 addons.go:69] Setting dashboard=true in profile "old-k8s-version-770000"
	I0920 10:48:53.191383    8359 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-770000"
	I0920 10:48:53.191399    8359 addons.go:231] Setting addon dashboard=true in "old-k8s-version-770000"
	I0920 10:48:53.191409    8359 host.go:66] Checking if "old-k8s-version-770000" exists ...
	W0920 10:48:53.191409    8359 addons.go:240] addon metrics-server should already be in state true
	I0920 10:48:53.191416    8359 config.go:182] Loaded profile config "old-k8s-version-770000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0920 10:48:53.191439    8359 host.go:66] Checking if "old-k8s-version-770000" exists ...
	W0920 10:48:53.191412    8359 addons.go:240] addon dashboard should already be in state true
	I0920 10:48:53.191500    8359 host.go:66] Checking if "old-k8s-version-770000" exists ...
	I0920 10:48:53.191669    8359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:53.191805    8359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:53.191890    8359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:53.191930    8359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:53.192182    8359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:53.193148    8359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:53.193258    8359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:53.193375    8359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:53.202322    8359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56182
	I0920 10:48:53.202742    8359 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:53.203143    8359 main.go:141] libmachine: Using API Version  1
	I0920 10:48:53.203159    8359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:53.203283    8359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56184
	I0920 10:48:53.203479    8359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:53.203675    8359 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:53.203960    8359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:53.204000    8359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:53.204122    8359 main.go:141] libmachine: Using API Version  1
	I0920 10:48:53.204135    8359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:53.204842    8359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:53.205094    8359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56186
	I0920 10:48:53.206259    8359 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:53.206290    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetState
	I0920 10:48:53.206580    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:53.206637    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | hyperkit pid from json: 8372
	I0920 10:48:53.206714    8359 main.go:141] libmachine: Using API Version  1
	I0920 10:48:53.206749    8359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:53.206983    8359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:53.207499    8359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:53.207534    8359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:53.208685    8359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56188
	I0920 10:48:53.210052    8359 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:53.210578    8359 main.go:141] libmachine: Using API Version  1
	I0920 10:48:53.210601    8359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:53.210931    8359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:53.211338    8359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:53.211362    8359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:53.214182    8359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56190
	I0920 10:48:53.214569    8359 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:53.214993    8359 main.go:141] libmachine: Using API Version  1
	I0920 10:48:53.215016    8359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:53.215279    8359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:53.215401    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetState
	I0920 10:48:53.215502    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:53.215575    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | hyperkit pid from json: 8372
	I0920 10:48:53.215995    8359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56192
	I0920 10:48:53.216346    8359 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:53.216594    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .DriverName
	I0920 10:48:53.216680    8359 main.go:141] libmachine: Using API Version  1
	I0920 10:48:53.216693    8359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:53.255615    8359 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 10:48:53.216950    8359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:53.218964    8359 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-770000"
	I0920 10:48:53.219227    8359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56194
	I0920 10:48:53.257117    8359 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-770000" context rescaled to 1 replicas
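The "rescaled to 1 replicas" line is a Scale-subresource update: a single-node cluster has no use for CoreDNS's default two replicas. A hedged client-go sketch of the same operation, with a placeholder kubeconfig path:

    // rescale.go: illustrative Scale-subresource update, the operation behind
    // the "rescaled to 1 replicas" line; the kubeconfig path is a placeholder.
    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()
        scale, err := cs.AppsV1().Deployments("kube-system").
            GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").
            UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
    }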
	I0920 10:48:53.276768    8359 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.40 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	W0920 10:48:53.276797    8359 addons.go:240] addon default-storageclass should already be in state true
	I0920 10:48:53.313601    8359 out.go:177] * Verifying Kubernetes components...
	I0920 10:48:53.276830    8359 host.go:66] Checking if "old-k8s-version-770000" exists ...
	I0920 10:48:53.276847    8359 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 10:48:53.350573    8359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:48:53.350561    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 10:48:53.276982    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetState
	I0920 10:48:53.350600    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHHostname
	I0920 10:48:53.277186    8359 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:53.313931    8359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:53.350676    8359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:53.350844    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHPort
	I0920 10:48:53.350863    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:53.351810    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | hyperkit pid from json: 8372
	I0920 10:48:53.351837    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHKeyPath
	I0920 10:48:53.352229    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHUsername
	I0920 10:48:53.353455    8359 sshutil.go:53] new ssh client: &{IP:192.168.64.40 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0920 10:48:53.353482    8359 main.go:141] libmachine: Using API Version  1
	I0920 10:48:53.353501    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .DriverName
	I0920 10:48:53.353504    8359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:53.390625    8359 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:48:53.353822    8359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:53.358802    8359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56197
	I0920 10:48:53.362636    8359 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.64.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
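That sed pipeline is how the "host record injected into CoreDNS's ConfigMap" line further down comes about: the first expression splices a hosts stanza into the Corefile just ahead of its forward block, and the second inserts a log directive ahead of errors to turn on query logging. The stanza it adds, reconstructed from the expressions above:

    hosts {
       192.168.64.1 host.minikube.internal
       fallthrough
    }

fallthrough lets any name not matched in the hosts block continue on to forward, so only host.minikube.internal (the hyperkit gateway, 192.168.64.1) is answered locally.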
	I0920 10:48:53.365737    8359 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-770000" to be "Ready" ...
	I0920 10:48:53.414670    8359 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 10:48:53.427678    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 10:48:53.427785    8359 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:48:53.427793    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:48:53.427874    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHHostname
	I0920 10:48:53.427876    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetState
	I0920 10:48:53.428065    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:53.428075    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHPort
	I0920 10:48:53.428200    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | hyperkit pid from json: 8372
	I0920 10:48:53.428209    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHKeyPath
	I0920 10:48:53.428264    8359 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:53.428342    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHUsername
	I0920 10:48:53.428459    8359 sshutil.go:53] new ssh client: &{IP:192.168.64.40 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0920 10:48:53.428648    8359 main.go:141] libmachine: Using API Version  1
	I0920 10:48:53.428661    8359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:53.428994    8359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:53.429324    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .DriverName
	I0920 10:48:53.429390    8359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:48:53.429414    8359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:48:53.436973    8359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56200
	I0920 10:48:53.442771    8359 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 10:48:53.467575    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 10:48:53.467530    8359 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0920 10:48:53.487546    8359 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 10:48:53.467928    8359 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:48:53.487582    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 10:48:53.524589    8359 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0920 10:48:53.487992    8359 main.go:141] libmachine: Using API Version  1
	I0920 10:48:53.523310    8359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:48:53.525190    8359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 10:48:53.561805    8359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:48:53.561862    8359 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0920 10:48:53.561879    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0920 10:48:53.561896    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHHostname
	I0920 10:48:53.562049    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHPort
	I0920 10:48:53.562113    8359 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:48:53.562161    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHKeyPath
	I0920 10:48:53.562238    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetState
	I0920 10:48:53.562307    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHUsername
	I0920 10:48:53.562349    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:48:53.562434    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | hyperkit pid from json: 8372
	I0920 10:48:53.562441    8359 sshutil.go:53] new ssh client: &{IP:192.168.64.40 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0920 10:48:53.563384    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .DriverName
	I0920 10:48:53.563536    8359 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:48:53.563545    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:48:53.563553    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHHostname
	I0920 10:48:53.563636    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHPort
	I0920 10:48:53.563728    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHKeyPath
	I0920 10:48:53.563805    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .GetSSHUsername
	I0920 10:48:53.563897    8359 sshutil.go:53] new ssh client: &{IP:192.168.64.40 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0920 10:48:53.598815    8359 node_ready.go:49] node "old-k8s-version-770000" has status "Ready":"True"
	I0920 10:48:53.598831    8359 node_ready.go:38] duration metric: took 171.110039ms waiting for node "old-k8s-version-770000" to be "Ready" ...
	I0920 10:48:53.598841    8359 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 10:48:53.609368    8359 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-p222s" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:53.692131    8359 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0920 10:48:53.692144    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0920 10:48:53.797094    8359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:48:53.798331    8359 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0920 10:48:53.798341    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0920 10:48:53.901595    8359 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0920 10:48:53.901607    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0920 10:48:53.908419    8359 start.go:917] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS's ConfigMap
	I0920 10:48:53.935696    8359 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0920 10:48:53.935708    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0920 10:48:53.978414    8359 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0920 10:48:53.978429    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0920 10:48:54.052055    8359 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0920 10:48:54.052070    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0920 10:48:54.143679    8359 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0920 10:48:54.143692    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0920 10:48:54.168064    8359 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0920 10:48:54.168078    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0920 10:48:54.190890    8359 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 10:48:54.190903    8359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0920 10:48:54.215191    8359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 10:48:54.422662    8359 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:54.422681    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .Close
	I0920 10:48:54.422883    8359 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:54.422891    8359 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:54.422898    8359 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:54.422903    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .Close
	I0920 10:48:54.422902    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | Closing plugin on server side
	I0920 10:48:54.423040    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | Closing plugin on server side
	I0920 10:48:54.423086    8359 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:54.423096    8359 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:54.491220    8359 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:54.491237    8359 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:54.491240    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .Close
	I0920 10:48:54.491247    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .Close
	I0920 10:48:54.491425    8359 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:54.491430    8359 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:54.491436    8359 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:54.491436    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | Closing plugin on server side
	I0920 10:48:54.491440    8359 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:54.491450    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | Closing plugin on server side
	I0920 10:48:54.491451    8359 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:54.491452    8359 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:54.491503    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .Close
	I0920 10:48:54.491512    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .Close
	I0920 10:48:54.491644    8359 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:54.491654    8359 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:54.491667    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | Closing plugin on server side
	I0920 10:48:54.491696    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | Closing plugin on server side
	I0920 10:48:54.491704    8359 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:54.491668    8359 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:54.491715    8359 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:54.491731    8359 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-770000"
	I0920 10:48:54.491743    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .Close
	I0920 10:48:54.491893    8359 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:54.491903    8359 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:54.491925    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | Closing plugin on server side
	I0920 10:48:54.843436    8359 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:54.843457    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .Close
	I0920 10:48:54.843620    8359 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:54.843631    8359 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:54.843663    8359 main.go:141] libmachine: Making call to close driver server
	I0920 10:48:54.843682    8359 main.go:141] libmachine: (old-k8s-version-770000) Calling .Close
	I0920 10:48:54.843681    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | Closing plugin on server side
	I0920 10:48:54.843868    8359 main.go:141] libmachine: Successfully made call to close driver server
	I0920 10:48:54.843887    8359 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 10:48:54.843898    8359 main.go:141] libmachine: (old-k8s-version-770000) DBG | Closing plugin on server side
	I0920 10:48:54.883722    8359 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-770000 addons enable metrics-server
	
	
	I0920 10:48:54.956634    8359 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0920 10:48:52.998912    8649 pod_ready.go:102] pod "coredns-5dd5756b68-965pn" in "kube-system" namespace has status "Ready":"False"
	I0920 10:48:55.496131    8649 pod_ready.go:102] pod "coredns-5dd5756b68-965pn" in "kube-system" namespace has status "Ready":"False"
	I0920 10:48:57.495537    8649 pod_ready.go:92] pod "coredns-5dd5756b68-965pn" in "kube-system" namespace has status "Ready":"True"
	I0920 10:48:57.495550    8649 pod_ready.go:81] duration metric: took 6.896067559s waiting for pod "coredns-5dd5756b68-965pn" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:57.495556    8649 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:55.030486    8359 addons.go:502] enable addons completed in 1.839146517s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0920 10:48:55.632175    8359 pod_ready.go:102] pod "coredns-5644d7b6d9-p222s" in "kube-system" namespace has status "Ready":"False"
	I0920 10:48:57.629128    8359 pod_ready.go:92] pod "coredns-5644d7b6d9-p222s" in "kube-system" namespace has status "Ready":"True"
	I0920 10:48:57.629141    8359 pod_ready.go:81] duration metric: took 4.019683793s waiting for pod "coredns-5644d7b6d9-p222s" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:57.629148    8359 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-86l4z" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:57.632240    8359 pod_ready.go:92] pod "kube-proxy-86l4z" in "kube-system" namespace has status "Ready":"True"
	I0920 10:48:57.632251    8359 pod_ready.go:81] duration metric: took 3.098435ms waiting for pod "kube-proxy-86l4z" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:57.632256    8359 pod_ready.go:38] duration metric: took 4.03332952s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
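Each of those pod_ready.go waits is the same check in a loop: fetch the pod and test whether its Ready condition is True, which is what the has status "Ready":"True" / "Ready":"False" lines are printing. A hedged client-go sketch of one such wait; kubeconfig path, namespace, and pod name are placeholders:

    // pod_ready sketch: poll a pod until its Ready condition is True,
    // the check behind the `has status "Ready":"True"` lines above.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").
                Get(context.Background(), "coredns-xxxxx", metav1.GetOptions{}) // placeholder name
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for pod to be Ready")
    }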
	I0920 10:48:57.632267    8359 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:48:57.632331    8359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:48:57.645242    8359 api_server.go:72] duration metric: took 4.368361616s to wait for apiserver process to appear ...
	I0920 10:48:57.645254    8359 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:48:57.645265    8359 api_server.go:253] Checking apiserver healthz at https://192.168.64.40:8443/healthz ...
	I0920 10:48:57.650118    8359 api_server.go:279] https://192.168.64.40:8443/healthz returned 200:
	ok
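The healthz wait is nothing more than an HTTPS GET against the apiserver expecting a 200 with a body of "ok". The real client authenticates with the cluster CA and client certificates from the kubeconfig; the sketch below disables verification purely to stay self-contained:

    // healthz probe: illustrative version of the
    // "Checking apiserver healthz at https://...:8443/healthz" step.
    // A real client would present the cluster CA and client certs;
    // InsecureSkipVerify here only keeps the sketch self-contained.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.64.40:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }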
	I0920 10:48:59.505980    8649 pod_ready.go:92] pod "etcd-embed-certs-564000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:48:59.505992    8649 pod_ready.go:81] duration metric: took 2.010387269s waiting for pod "etcd-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:59.505998    8649 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:59.509604    8649 pod_ready.go:92] pod "kube-apiserver-embed-certs-564000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:48:59.509613    8649 pod_ready.go:81] duration metric: took 3.61027ms waiting for pod "kube-apiserver-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:59.509621    8649 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:59.513175    8649 pod_ready.go:92] pod "kube-controller-manager-embed-certs-564000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:48:59.513185    8649 pod_ready.go:81] duration metric: took 3.557767ms waiting for pod "kube-controller-manager-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:59.513192    8649 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-766r9" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:59.792708    8649 pod_ready.go:92] pod "kube-proxy-766r9" in "kube-system" namespace has status "Ready":"True"
	I0920 10:48:59.792721    8649 pod_ready.go:81] duration metric: took 279.519006ms waiting for pod "kube-proxy-766r9" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:59.792727    8649 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:49:00.591388    8649 pod_ready.go:92] pod "kube-scheduler-embed-certs-564000" in "kube-system" namespace has status "Ready":"True"
	I0920 10:49:00.591399    8649 pod_ready.go:81] duration metric: took 798.651706ms waiting for pod "kube-scheduler-embed-certs-564000" in "kube-system" namespace to be "Ready" ...
	I0920 10:49:00.591406    8649 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace to be "Ready" ...
	I0920 10:48:57.650837    8359 api_server.go:141] control plane version: v1.16.0
	I0920 10:48:57.671037    8359 api_server.go:131] duration metric: took 25.771796ms to wait for apiserver health ...
	I0920 10:48:57.671047    8359 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 10:48:57.673550    8359 system_pods.go:59] 4 kube-system pods found
	I0920 10:48:57.673563    8359 system_pods.go:61] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:48:57.673566    8359 system_pods.go:61] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:48:57.673572    8359 system_pods.go:61] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:48:57.673576    8359 system_pods.go:61] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:48:57.673580    8359 system_pods.go:74] duration metric: took 2.528962ms to wait for pod list to return data ...
	I0920 10:48:57.673584    8359 default_sa.go:34] waiting for default service account to be created ...
	I0920 10:48:57.675009    8359 default_sa.go:45] found service account: "default"
	I0920 10:48:57.675018    8359 default_sa.go:55] duration metric: took 1.430061ms for default service account to be created ...
	I0920 10:48:57.675022    8359 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 10:48:57.677058    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:48:57.677072    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:48:57.677078    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:48:57.677087    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:48:57.677097    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:48:57.677122    8359 retry.go:31] will retry after 237.777994ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
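The retry.go:31 lines that follow are a poll-with-backoff loop: list the kube-system pods, diff them against the required control-plane components, and if anything is missing sleep a growing, jittered delay before trying again, which is why the waits climb from roughly 240ms toward several seconds. A generic sketch of that shape; missing() is a stub for the real pod listing:

    // retry_backoff.go: the shape of the retry.go loop above. Check, then
    // sleep with a growing jittered delay; missing() stands in for the
    // real kube-system pod listing.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // missing reports which required components have no running pod yet.
    // Stubbed here; the real check inspects pods in kube-system.
    func missing() []string {
        return []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
    }

    func main() {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            m := missing()
            if len(m) == 0 {
                fmt.Println("all system components running")
                return
            }
            // Jitter the delay so parallel pollers don't sync up.
            d := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: missing components: %v\n", d, m) // mirrors retry.go:31
            time.Sleep(d)
            delay = delay * 3 / 2 // grow toward the multi-second waits seen above
        }
        fmt.Println("timed out; components still missing")
    }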
	I0920 10:48:57.918195    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:48:57.918213    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:48:57.918218    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:48:57.918224    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:48:57.918232    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:48:57.918244    8359 retry.go:31] will retry after 337.08427ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:48:58.258246    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:48:58.258260    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:48:58.258266    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:48:58.258274    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:48:58.258281    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:48:58.258290    8359 retry.go:31] will retry after 421.216792ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:48:58.683528    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:48:58.683542    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:48:58.683546    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:48:58.683553    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:48:58.683561    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:48:58.683571    8359 retry.go:31] will retry after 478.616711ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:48:59.165360    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:48:59.165375    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:48:59.165379    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:48:59.165384    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:48:59.165389    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:48:59.165399    8359 retry.go:31] will retry after 650.501908ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:48:59.819619    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:48:59.819632    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:48:59.819636    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:48:59.819644    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:48:59.819649    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:48:59.819659    8359 retry.go:31] will retry after 630.245322ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:00.453371    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:00.453384    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:00.453389    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:00.453396    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:00.453401    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:00.453412    8359 retry.go:31] will retry after 1.067322607s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:01.524163    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:01.524176    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:01.524181    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:01.524186    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:01.524195    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:01.524205    8359 retry.go:31] will retry after 984.748184ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:02.511977    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:02.511991    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:02.511997    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:02.512005    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:02.512010    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:02.512020    8359 retry.go:31] will retry after 1.737395282s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:02.896309    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:05.396875    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:04.253086    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:04.253100    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:04.253106    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:04.253112    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:04.253121    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:04.253135    8359 retry.go:31] will retry after 1.779469743s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:06.035505    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:06.035521    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:06.035525    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:06.035530    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:06.035536    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:06.035546    8359 retry.go:31] will retry after 2.454632987s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:07.895895    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:10.396206    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:08.492825    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:08.492839    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:08.492846    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:08.492852    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:08.492860    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:08.492872    8359 retry.go:31] will retry after 3.176308566s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:11.673708    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:11.673721    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:11.673725    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:11.673730    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:11.673735    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:11.673745    8359 retry.go:31] will retry after 3.49159565s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:12.895243    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:15.394783    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:17.396472    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:15.168635    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:15.168649    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:15.168653    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:15.168660    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:15.168666    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:15.168676    8359 retry.go:31] will retry after 4.138543108s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:19.898427    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:22.396565    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:19.311980    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:19.311998    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:19.312004    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:19.312026    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:19.312031    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:19.312041    8359 retry.go:31] will retry after 6.159353982s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:24.897455    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:27.397973    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:25.474924    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:25.474939    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:25.474943    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:25.474948    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:25.474960    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:25.474970    8359 retry.go:31] will retry after 7.657656805s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:29.895928    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:31.896538    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:33.897981    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:36.397273    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:33.137348    8359 system_pods.go:86] 4 kube-system pods found
	I0920 10:49:33.137361    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:33.137366    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:33.137371    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:33.137385    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:33.137396    8359 retry.go:31] will retry after 11.019063046s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:38.897341    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:41.396036    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:43.396662    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:45.898462    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:44.161830    8359 system_pods.go:86] 6 kube-system pods found
	I0920 10:49:44.161843    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:44.161847    8359 system_pods.go:89] "etcd-old-k8s-version-770000" [c82a64d2-9504-4917-a6a3-d577070a3380] Pending
	I0920 10:49:44.161851    8359 system_pods.go:89] "kube-controller-manager-old-k8s-version-770000" [c445d465-a5a7-47ca-a465-3272b8066c01] Pending
	I0920 10:49:44.161854    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:44.161863    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:44.161869    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:44.161878    8359 retry.go:31] will retry after 12.17930399s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0920 10:49:48.397096    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:50.397433    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:52.398316    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:54.897188    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:56.898655    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:49:56.347101    8359 system_pods.go:86] 8 kube-system pods found
	I0920 10:49:56.347134    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:49:56.347149    8359 system_pods.go:89] "etcd-old-k8s-version-770000" [c82a64d2-9504-4917-a6a3-d577070a3380] Running
	I0920 10:49:56.347157    8359 system_pods.go:89] "kube-apiserver-old-k8s-version-770000" [c333f6f7-6add-47fd-a3ff-134258f5743b] Pending
	I0920 10:49:56.347164    8359 system_pods.go:89] "kube-controller-manager-old-k8s-version-770000" [c445d465-a5a7-47ca-a465-3272b8066c01] Running
	I0920 10:49:56.347169    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:49:56.347173    8359 system_pods.go:89] "kube-scheduler-old-k8s-version-770000" [afc06128-1481-4b64-b3a4-44575aea08be] Pending
	I0920 10:49:56.347185    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:49:56.347222    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:49:56.347242    8359 retry.go:31] will retry after 10.827302011s: missing components: kube-apiserver, kube-scheduler
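	Between attempts the retry intervals grow roughly exponentially with jitter (1.8s, 2.5s, 3.2s, ... up to ~12s) while the static control-plane pods register one at a time: etcd and kube-controller-manager appear first as Pending, then kube-apiserver and kube-scheduler, until all four report Running. The same convergence can be watched interactively, e.g.:

	  $ kubectl -n kube-system get pods -w

	(a minimal sketch; -w streams phase transitions for each pod as the kubelet registers its mirror pods)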
	I0920 10:49:59.397999    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:50:01.898362    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:50:07.179609    8359 system_pods.go:86] 8 kube-system pods found
	I0920 10:50:07.179623    8359 system_pods.go:89] "coredns-5644d7b6d9-p222s" [a259debe-4ec3-4794-a091-e94b68b03e4f] Running
	I0920 10:50:07.179627    8359 system_pods.go:89] "etcd-old-k8s-version-770000" [c82a64d2-9504-4917-a6a3-d577070a3380] Running
	I0920 10:50:07.179631    8359 system_pods.go:89] "kube-apiserver-old-k8s-version-770000" [c333f6f7-6add-47fd-a3ff-134258f5743b] Running
	I0920 10:50:07.179636    8359 system_pods.go:89] "kube-controller-manager-old-k8s-version-770000" [c445d465-a5a7-47ca-a465-3272b8066c01] Running
	I0920 10:50:07.179639    8359 system_pods.go:89] "kube-proxy-86l4z" [ee91fdfc-8ada-47b8-b59b-59ac346e1ca7] Running
	I0920 10:50:07.179643    8359 system_pods.go:89] "kube-scheduler-old-k8s-version-770000" [afc06128-1481-4b64-b3a4-44575aea08be] Running
	I0920 10:50:07.179651    8359 system_pods.go:89] "metrics-server-74d5856cc6-z4jsq" [4a7ebeba-14d5-4aa8-80cb-47c81df2a810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 10:50:07.179656    8359 system_pods.go:89] "storage-provisioner" [056b2ca8-7047-4827-ad97-5245155bbd49] Running
	I0920 10:50:07.179661    8359 system_pods.go:126] duration metric: took 1m9.503314775s to wait for k8s-apps to be running ...
	I0920 10:50:07.179665    8359 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 10:50:07.179717    8359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:50:07.189082    8359 system_svc.go:56] duration metric: took 9.407977ms WaitForService to wait for kubelet.
	I0920 10:50:07.189097    8359 kubeadm.go:581] duration metric: took 1m13.910899164s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0920 10:50:07.189109    8359 node_conditions.go:102] verifying NodePressure condition ...
	I0920 10:50:07.191636    8359 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0920 10:50:07.191649    8359 node_conditions.go:123] node cpu capacity is 2
	I0920 10:50:07.191656    8359 node_conditions.go:105] duration metric: took 2.543965ms to run NodePressure ...
	I0920 10:50:07.191665    8359 start.go:228] waiting for startup goroutines ...
	I0920 10:50:07.191670    8359 start.go:233] waiting for cluster config update ...
	I0920 10:50:07.191679    8359 start.go:242] writing updated cluster config ...
	I0920 10:50:07.191987    8359 ssh_runner.go:195] Run: rm -f paused
	I0920 10:50:07.229558    8359 start.go:600] kubectl: 1.27.2, cluster: 1.16.0 (minor skew: 11)
	I0920 10:50:07.252124    8359 out.go:177] 
	W0920 10:50:07.273430    8359 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0920 10:50:07.295364    8359 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0920 10:50:07.338338    8359 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-770000" cluster and "default" namespace by default
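	The skew warning is expected for this profile: kubectl v1.27.2 against a v1.16.0 API server is 11 minor versions apart, far outside the one-minor-version window kubectl supports. As the hint above suggests, a matching client can be run through minikube itself, e.g.:

	  $ out/minikube-darwin-amd64 kubectl -p old-k8s-version-770000 -- get pods -A

	(a sketch based on the log's own hint; -p selects the profile, and minikube fetches a kubectl binary matching the cluster version on first use)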
	I0920 10:50:04.398487    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:50:06.896250    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:50:08.897075    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:50:10.897172    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:50:13.397820    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
	I0920 10:50:15.898825    8649 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zrzfp" in "kube-system" namespace has status "Ready":"False"
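	A second test process (pid 8649) is polling readiness of metrics-server-57f55c9bc5-zrzfp in a different profile throughout this window and never observes Ready:"True". Outside the harness, the equivalent blocking wait could be expressed directly, e.g.:

	  $ kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-57f55c9bc5-zrzfp --timeout=120s

	(a minimal sketch; pod name taken from the poll lines above, timeout value illustrative)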
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-09-20 17:42:08 UTC, ends at Wed 2023-09-20 17:50:18 UTC. --
	Sep 20 17:49:08 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:49:08.710177524Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 20 17:49:08 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:49:08.710215844Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 20 17:49:08 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:49:08.711035419Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 20 17:49:32 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:49:32.754424483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 20 17:49:32 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:49:32.755553140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:49:32 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:49:32.755596972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 20 17:49:32 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:49:32.755610264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:49:33 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:49:33.043569668Z" level=info msg="ignoring event" container=a67f4e65eb74a80001848da1324d911efced7694ef766d24ed8ef469064f97c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:49:33 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:49:33.044329458Z" level=info msg="shim disconnected" id=a67f4e65eb74a80001848da1324d911efced7694ef766d24ed8ef469064f97c1 namespace=moby
	Sep 20 17:49:33 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:49:33.044436682Z" level=warning msg="cleaning up after shim disconnected" id=a67f4e65eb74a80001848da1324d911efced7694ef766d24ed8ef469064f97c1 namespace=moby
	Sep 20 17:49:33 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:49:33.044553983Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 17:49:33 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:49:33.718284944Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 20 17:49:33 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:49:33.718407939Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 20 17:49:33 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:49:33.719403051Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 20 17:50:02 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:50:02.741075244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 20 17:50:02 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:50:02.741122951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:50:02 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:50:02.741136849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 20 17:50:02 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:50:02.741151336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:50:03 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:50:03.018739195Z" level=info msg="ignoring event" container=80e2db860fd6fba21772eca536f98e32e3414e7893f0741d67e5ddff7941c867 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:50:03 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:50:03.019740504Z" level=info msg="shim disconnected" id=80e2db860fd6fba21772eca536f98e32e3414e7893f0741d67e5ddff7941c867 namespace=moby
	Sep 20 17:50:03 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:50:03.019987741Z" level=warning msg="cleaning up after shim disconnected" id=80e2db860fd6fba21772eca536f98e32e3414e7893f0741d67e5ddff7941c867 namespace=moby
	Sep 20 17:50:03 old-k8s-version-770000 dockerd[1184]: time="2023-09-20T17:50:03.020032708Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 17:50:14 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:50:14.709734983Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 20 17:50:14 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:50:14.709772844Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 20 17:50:14 old-k8s-version-770000 dockerd[1178]: time="2023-09-20T17:50:14.710898990Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
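	Every pull attempt in this journal dies at DNS: the test evidently points the metrics-server addon at a deliberately unresolvable registry, and fake.domain cannot be looked up via the host resolver at 192.168.64.1, so dockerd never reaches an endpoint. The failure is reproducible by hand from inside the VM, e.g.:

	  $ minikube -p old-k8s-version-770000 ssh
	  $ docker pull fake.domain/registry.k8s.io/echoserver:1.4
	  Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host

	(a sketch; the expected error line is copied from the daemon log above)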
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	80e2db860fd6   a90209bb39e3             "nginx -g 'daemon of…"   16 seconds ago       Exited (1) 15 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard_981632d0-5769-49fc-b065-8c87495ddef2_3
	61984590d7ba   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-5hlpg_kubernetes-dashboard_74afda2c-c60c-4b3a-af1b-50df907bedfa_0
	4a3678be3f6f   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard_981632d0-5769-49fc-b065-8c87495ddef2_0
	d29537e079b8   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-5hlpg_kubernetes-dashboard_74afda2c-c60c-4b3a-af1b-50df907bedfa_0
	06e883a4f6f2   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-z4jsq_kube-system_4a7ebeba-14d5-4aa8-80cb-47c81df2a810_0
	85248483210d   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_056b2ca8-7047-4827-ad97-5245155bbd49_0
	f564946ea337   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-p222s_kube-system_a259debe-4ec3-4794-a091-e94b68b03e4f_0
	3f27e7bd3441   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_056b2ca8-7047-4827-ad97-5245155bbd49_0
	fc36eb181316   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-p222s_kube-system_a259debe-4ec3-4794-a091-e94b68b03e4f_0
	81dccb59f97e   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-86l4z_kube-system_ee91fdfc-8ada-47b8-b59b-59ac346e1ca7_0
	f7aa748789e8   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-86l4z_kube-system_ee91fdfc-8ada-47b8-b59b-59ac346e1ca7_0
	85eeb807e0e5   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                     k8s_etcd_etcd-old-k8s-version-770000_kube-system_7eafda632867c4a0113ab9054f5b1c48_0
	f154a2a002fc   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                     k8s_kube-scheduler_kube-scheduler-old-k8s-version-770000_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	80c32625d039   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                     k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-770000_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	5af43df04e2b   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                     k8s_kube-apiserver_kube-apiserver-old-k8s-version-770000_kube-system_7ab33bae85b14572b4945e1e94beb673_0
	76d165537fff   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_etcd-old-k8s-version-770000_kube-system_7eafda632867c4a0113ab9054f5b1c48_0
	be795dd40838   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-scheduler-old-k8s-version-770000_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	48d400145695   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-controller-manager-old-k8s-version-770000_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	754f59dec47f   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-apiserver-old-k8s-version-770000_kube-system_7ab33bae85b14572b4945e1e94beb673_0
	time="2023-09-20T17:50:18Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
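	The trailing fatal is the log collector probing the CRI v1 RuntimeService on the legacy dockershim socket; the dockershim in this v1.16 kubelet only speaks the older v1alpha2 API, so the probe fails and the container table above was gathered from the Docker API instead. On a VM whose runtime does serve CRI v1, the equivalent listing would be something like:

	  $ sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a

	(a hedged sketch; cri-dockerd and its socket are not present on this VM, though --runtime-endpoint is a standard crictl flag)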
	
	* 
	* ==> coredns [f564946ea337] <==
	* .:53
	2023-09-20T17:48:55.388Z [INFO] plugin/reload: Running configuration MD5 = 46cbc15810136842e5653e578aaacfef
	2023-09-20T17:48:55.389Z [INFO] CoreDNS-1.6.2
	2023-09-20T17:48:55.389Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-09-20T17:48:55.392Z [INFO] 127.0.0.1:52488 - 24703 "HINFO IN 3455458319925550537.767183842917157952. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.003483533s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-770000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-770000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=df65c8f75cd36776317ad1fc65dba4e994a4b8ca
	                    minikube.k8s.io/name=old-k8s-version-770000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_20T10_48_37_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 20 Sep 2023 17:48:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 20 Sep 2023 17:49:32 +0000   Wed, 20 Sep 2023 17:48:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 20 Sep 2023 17:49:32 +0000   Wed, 20 Sep 2023 17:48:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 20 Sep 2023 17:49:32 +0000   Wed, 20 Sep 2023 17:48:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 20 Sep 2023 17:49:32 +0000   Wed, 20 Sep 2023 17:48:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.40
	  Hostname:    old-k8s-version-770000
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2166052Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2166052Ki
	 pods:               110
	System Info:
	 Machine ID:                 3dad1bbc9f9144f5aa5142fb6aa7fb0c
	 System UUID:                97e911ee-0000-0000-95c2-149d997fca88
	 Boot ID:                    ecdabff9-5d5a-4a94-b504-118571162f7b
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-p222s                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                etcd-old-k8s-version-770000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                kube-apiserver-old-k8s-version-770000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                kube-controller-manager-old-k8s-version-770000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                kube-proxy-86l4z                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                kube-scheduler-old-k8s-version-770000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                metrics-server-74d5856cc6-z4jsq                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         83s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-svs5d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-5hlpg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet, old-k8s-version-770000     Node old-k8s-version-770000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet, old-k8s-version-770000     Node old-k8s-version-770000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x7 over 111s)  kubelet, old-k8s-version-770000     Node old-k8s-version-770000 status is now: NodeHasSufficientPID
	  Normal  Starting                 85s                  kube-proxy, old-k8s-version-770000  Starting kube-proxy.
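	The same node summary can be regenerated on demand against this profile, e.g.:

	  $ kubectl --context old-k8s-version-770000 describe node old-k8s-version-770000

	(assuming the default kubeconfig written by minikube, whose context name matches the profile)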
	
	* 
	* ==> dmesg <==
	* [  +0.028760] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +4.985645] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007398] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.297390] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.040857] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.913033] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.177823] systemd-fstab-generator[524]: Ignoring "noauto" for root device
	[  +0.090623] systemd-fstab-generator[535]: Ignoring "noauto" for root device
	[  +0.749742] systemd-fstab-generator[789]: Ignoring "noauto" for root device
	[  +0.218920] systemd-fstab-generator[828]: Ignoring "noauto" for root device
	[  +0.100971] systemd-fstab-generator[839]: Ignoring "noauto" for root device
	[  +0.118008] systemd-fstab-generator[858]: Ignoring "noauto" for root device
	[  +6.129412] systemd-fstab-generator[1169]: Ignoring "noauto" for root device
	[  +1.754656] kauditd_printk_skb: 67 callbacks suppressed
	[ +14.311000] systemd-fstab-generator[1625]: Ignoring "noauto" for root device
	[ +20.692042] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.075765] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep20 17:43] kauditd_printk_skb: 7 callbacks suppressed
	[Sep20 17:48] systemd-fstab-generator[7035]: Ignoring "noauto" for root device
	[Sep20 17:49] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +0.016857] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [85eeb807e0e5] <==
	* 2023-09-20 17:48:29.002582 I | raft: 8700b5f30fd8925d became follower at term 0
	2023-09-20 17:48:29.002603 I | raft: newRaft 8700b5f30fd8925d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-09-20 17:48:29.002612 I | raft: 8700b5f30fd8925d became follower at term 1
	2023-09-20 17:48:29.005310 W | auth: simple token is not cryptographically signed
	2023-09-20 17:48:29.097130 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-20 17:48:29.097943 I | etcdserver: 8700b5f30fd8925d as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-20 17:48:29.098290 I | etcdserver/membership: added member 8700b5f30fd8925d [https://192.168.64.40:2380] to cluster 39244c9d1c1d508b
	2023-09-20 17:48:29.100287 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-20 17:48:29.100563 I | embed: listening for metrics on http://192.168.64.40:2381
	2023-09-20 17:48:29.100728 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-20 17:48:29.510553 I | raft: 8700b5f30fd8925d is starting a new election at term 1
	2023-09-20 17:48:29.510595 I | raft: 8700b5f30fd8925d became candidate at term 2
	2023-09-20 17:48:29.510605 I | raft: 8700b5f30fd8925d received MsgVoteResp from 8700b5f30fd8925d at term 2
	2023-09-20 17:48:29.510612 I | raft: 8700b5f30fd8925d became leader at term 2
	2023-09-20 17:48:29.510617 I | raft: raft.node: 8700b5f30fd8925d elected leader 8700b5f30fd8925d at term 2
	2023-09-20 17:48:29.519741 I | etcdserver: published {Name:old-k8s-version-770000 ClientURLs:[https://192.168.64.40:2379]} to cluster 39244c9d1c1d508b
	2023-09-20 17:48:29.538108 I | embed: ready to serve client requests
	2023-09-20 17:48:29.538957 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-20 17:48:29.539043 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-20 17:48:29.539368 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-20 17:48:29.539492 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-20 17:48:29.539548 I | embed: ready to serve client requests
	2023-09-20 17:48:29.540220 I | embed: serving client requests on 192.168.64.40:2379
	2023-09-20 17:48:53.686001 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-770000\" " with result "range_response_count:1 size:2997" took too long (162.84947ms) to execute
	2023-09-20 17:48:53.692284 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-770000\" " with result "range_response_count:1 size:2997" took too long (148.349527ms) to execute
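	The two "took too long" warnings are single reads that crossed etcd's slow-request warning threshold (100ms by default in this release) during the startup rush; nothing fails or retries as a result. Endpoint health could be spot-checked from inside the VM, e.g.:

	  $ sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key endpoint status -w table

	(a sketch; cert paths taken from the embed ClientTLS line above, flags standard for etcdctl v3)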
	
	* 
	* ==> kernel <==
	*  17:50:19 up 8 min,  0 users,  load average: 0.44, 0.33, 0.16
	Linux old-k8s-version-770000 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5af43df04e2b] <==
	* I0920 17:48:33.234812       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0920 17:48:33.234820       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0920 17:48:33.240087       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0920 17:48:33.246450       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0920 17:48:33.246480       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0920 17:48:35.016594       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 17:48:35.295792       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0920 17:48:35.652288       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.64.40]
	I0920 17:48:35.652813       1 controller.go:606] quota admission added evaluator for: endpoints
	I0920 17:48:36.511378       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0920 17:48:37.138443       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0920 17:48:37.441356       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0920 17:48:52.793053       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0920 17:48:52.815851       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0920 17:48:53.151658       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0920 17:48:55.780799       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0920 17:48:55.780860       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 17:48:55.780934       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0920 17:48:55.780962       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 17:49:55.781276       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0920 17:49:55.781373       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 17:49:55.781432       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0920 17:49:55.781443       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
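	Both 503s trace back to the aggregated metrics API: v1beta1.metrics.k8s.io is registered as an APIService, but its backing metrics-server pod never starts (see the image-pull failures recorded elsewhere in this log), so the aggregator keeps failing its availability probe and rate-limits the requeue. The registration state can be checked with, e.g.:

	  $ kubectl get apiservice v1beta1.metrics.k8s.io
	  NAME                     SERVICE                      AVAILABLE                  AGE
	  v1beta1.metrics.k8s.io   kube-system/metrics-server   False (MissingEndpoints)   2m

	(a sketch; the AVAILABLE value and age shown are illustrative, not captured output)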
	
	* 
	* ==> kube-controller-manager [80c32625d039] <==
	* E0920 17:48:54.694919       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0920 17:48:54.701406       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0920 17:48:54.715621       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0920 17:48:54.715951       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0920 17:48:54.716165       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"a9d9b6f7-b7a8-4143-bd26-dd205bac028d", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0920 17:48:54.716178       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"7ad5c741-f77a-40f6-8786-0484aab48bb3", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0920 17:48:54.727365       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0920 17:48:54.727545       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"7ad5c741-f77a-40f6-8786-0484aab48bb3", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0920 17:48:54.736730       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0920 17:48:54.736919       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"a9d9b6f7-b7a8-4143-bd26-dd205bac028d", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0920 17:48:54.739156       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0920 17:48:54.739207       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"7ad5c741-f77a-40f6-8786-0484aab48bb3", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0920 17:48:54.764805       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0920 17:48:54.764876       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0920 17:48:54.764910       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"7ad5c741-f77a-40f6-8786-0484aab48bb3", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0920 17:48:54.764927       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"a9d9b6f7-b7a8-4143-bd26-dd205bac028d", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0920 17:48:54.793291       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0920 17:48:54.793340       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"a9d9b6f7-b7a8-4143-bd26-dd205bac028d", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0920 17:48:55.469113       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"8289008e-4d21-4273-bf40-4714bff39c40", APIVersion:"apps/v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-z4jsq
	I0920 17:48:55.833555       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"a9d9b6f7-b7a8-4143-bd26-dd205bac028d", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-5hlpg
	I0920 17:48:55.859832       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"7ad5c741-f77a-40f6-8786-0484aab48bb3", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-svs5d
	E0920 17:49:23.593850       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0920 17:49:25.344142       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 17:49:53.845641       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0920 17:49:57.346611       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [81dccb59f97e] <==
	* W0920 17:48:53.958978       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0920 17:48:53.965679       1 node.go:135] Successfully retrieved node IP: 192.168.64.40
	I0920 17:48:53.965698       1 server_others.go:149] Using iptables Proxier.
	I0920 17:48:53.965905       1 server.go:529] Version: v1.16.0
	I0920 17:48:53.967173       1 config.go:313] Starting service config controller
	I0920 17:48:53.967195       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0920 17:48:53.967440       1 config.go:131] Starting endpoints config controller
	I0920 17:48:53.967449       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0920 17:48:54.069199       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0920 17:48:54.069214       1 shared_informer.go:204] Caches are synced for service config 
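	The first line simply means proxy-mode was left unset in the kube-proxy configuration, so it falls back to the iptables proxier; the rest of the startup is healthy (node IP resolved, both config caches synced). In a kubeadm-style cluster the mode can be pinned explicitly by setting mode: "iptables" in the KubeProxyConfiguration held in the kube-proxy ConfigMap, e.g.:

	  $ kubectl -n kube-system edit configmap kube-proxy

	(a hedged sketch; kubeadm-provisioned clusters such as minikube's store the kube-proxy config this way, and the kube-proxy pods must be restarted to pick up the change)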
	
	* 
	* ==> kube-scheduler [f154a2a002fc] <==
	* I0920 17:48:32.306633       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0920 17:48:32.345781       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:48:32.346241       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:48:32.347196       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:48:32.347222       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:48:32.347242       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:48:32.348172       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:48:32.348196       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:48:32.348215       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:48:32.348232       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:48:32.348248       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:48:32.349778       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:48:33.346801       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:48:33.349882       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:48:33.350903       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:48:33.352425       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:48:33.353364       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:48:33.356189       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:48:33.357957       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:48:33.359107       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:48:33.360075       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:48:33.361736       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:48:33.362426       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:48:52.830825       1 factory.go:585] pod is already present in the activeQ
	E0920 17:48:52.880986       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-09-20 17:42:08 UTC, ends at Wed 2023-09-20 17:50:19 UTC. --
	Sep 20 17:49:09 old-k8s-version-770000 kubelet[7041]: E0920 17:49:09.285281    7041 pod_workers.go:191] Error syncing pod 981632d0-5769-49fc-b065-8c87495ddef2 ("dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"
	Sep 20 17:49:10 old-k8s-version-770000 kubelet[7041]: W0920 17:49:10.289153    7041 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-svs5d through plugin: invalid network status for
	Sep 20 17:49:10 old-k8s-version-770000 kubelet[7041]: E0920 17:49:10.292227    7041 pod_workers.go:191] Error syncing pod 981632d0-5769-49fc-b065-8c87495ddef2 ("dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"
	Sep 20 17:49:17 old-k8s-version-770000 kubelet[7041]: E0920 17:49:17.590998    7041 pod_workers.go:191] Error syncing pod 981632d0-5769-49fc-b065-8c87495ddef2 ("dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"
	Sep 20 17:49:20 old-k8s-version-770000 kubelet[7041]: E0920 17:49:20.704509    7041 pod_workers.go:191] Error syncing pod 4a7ebeba-14d5-4aa8-80cb-47c81df2a810 ("metrics-server-74d5856cc6-z4jsq_kube-system(4a7ebeba-14d5-4aa8-80cb-47c81df2a810)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 17:49:33 old-k8s-version-770000 kubelet[7041]: W0920 17:49:33.429165    7041 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-svs5d through plugin: invalid network status for
	Sep 20 17:49:33 old-k8s-version-770000 kubelet[7041]: E0920 17:49:33.433802    7041 pod_workers.go:191] Error syncing pod 981632d0-5769-49fc-b065-8c87495ddef2 ("dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"
	Sep 20 17:49:33 old-k8s-version-770000 kubelet[7041]: E0920 17:49:33.719837    7041 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 20 17:49:33 old-k8s-version-770000 kubelet[7041]: E0920 17:49:33.719877    7041 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 20 17:49:33 old-k8s-version-770000 kubelet[7041]: E0920 17:49:33.719936    7041 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 20 17:49:33 old-k8s-version-770000 kubelet[7041]: E0920 17:49:33.719976    7041 pod_workers.go:191] Error syncing pod 4a7ebeba-14d5-4aa8-80cb-47c81df2a810 ("metrics-server-74d5856cc6-z4jsq_kube-system(4a7ebeba-14d5-4aa8-80cb-47c81df2a810)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 20 17:49:34 old-k8s-version-770000 kubelet[7041]: W0920 17:49:34.441098    7041 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-svs5d through plugin: invalid network status for
	Sep 20 17:49:37 old-k8s-version-770000 kubelet[7041]: E0920 17:49:37.591104    7041 pod_workers.go:191] Error syncing pod 981632d0-5769-49fc-b065-8c87495ddef2 ("dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"
	Sep 20 17:49:48 old-k8s-version-770000 kubelet[7041]: E0920 17:49:48.704681    7041 pod_workers.go:191] Error syncing pod 4a7ebeba-14d5-4aa8-80cb-47c81df2a810 ("metrics-server-74d5856cc6-z4jsq_kube-system(4a7ebeba-14d5-4aa8-80cb-47c81df2a810)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 17:49:50 old-k8s-version-770000 kubelet[7041]: E0920 17:49:50.702078    7041 pod_workers.go:191] Error syncing pod 981632d0-5769-49fc-b065-8c87495ddef2 ("dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"
	Sep 20 17:50:00 old-k8s-version-770000 kubelet[7041]: E0920 17:50:00.704016    7041 pod_workers.go:191] Error syncing pod 4a7ebeba-14d5-4aa8-80cb-47c81df2a810 ("metrics-server-74d5856cc6-z4jsq_kube-system(4a7ebeba-14d5-4aa8-80cb-47c81df2a810)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 17:50:03 old-k8s-version-770000 kubelet[7041]: W0920 17:50:03.041956    7041 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod981632d0-5769-49fc-b065-8c87495ddef2/80e2db860fd6fba21772eca536f98e32e3414e7893f0741d67e5ddff7941c867": none of the resources are being tracked.
	Sep 20 17:50:03 old-k8s-version-770000 kubelet[7041]: W0920 17:50:03.628002    7041 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-svs5d through plugin: invalid network status for
	Sep 20 17:50:03 old-k8s-version-770000 kubelet[7041]: E0920 17:50:03.631916    7041 pod_workers.go:191] Error syncing pod 981632d0-5769-49fc-b065-8c87495ddef2 ("dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"
	Sep 20 17:50:04 old-k8s-version-770000 kubelet[7041]: W0920 17:50:04.639013    7041 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-svs5d through plugin: invalid network status for
	Sep 20 17:50:07 old-k8s-version-770000 kubelet[7041]: E0920 17:50:07.592160    7041 pod_workers.go:191] Error syncing pod 981632d0-5769-49fc-b065-8c87495ddef2 ("dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-svs5d_kubernetes-dashboard(981632d0-5769-49fc-b065-8c87495ddef2)"
	Sep 20 17:50:14 old-k8s-version-770000 kubelet[7041]: E0920 17:50:14.711276    7041 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 20 17:50:14 old-k8s-version-770000 kubelet[7041]: E0920 17:50:14.711317    7041 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 20 17:50:14 old-k8s-version-770000 kubelet[7041]: E0920 17:50:14.711347    7041 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 20 17:50:14 old-k8s-version-770000 kubelet[7041]: E0920 17:50:14.711365    7041 pod_workers.go:191] Error syncing pod 4a7ebeba-14d5-4aa8-80cb-47c81df2a810 ("metrics-server-74d5856cc6-z4jsq_kube-system(4a7ebeba-14d5-4aa8-80cb-47c81df2a810)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	
	* 
	* ==> kubernetes-dashboard [61984590d7ba] <==
	* 2023/09/20 17:49:02 Starting overwatch
	2023/09/20 17:49:02 Using namespace: kubernetes-dashboard
	2023/09/20 17:49:02 Using in-cluster config to connect to apiserver
	2023/09/20 17:49:02 Using secret token for csrf signing
	2023/09/20 17:49:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/09/20 17:49:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/09/20 17:49:02 Successful initial request to the apiserver, version: v1.16.0
	2023/09/20 17:49:02 Generating JWE encryption key
	2023/09/20 17:49:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/09/20 17:49:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/09/20 17:49:02 Initializing JWE encryption key from synchronized object
	2023/09/20 17:49:02 Creating in-cluster Sidecar client
	2023/09/20 17:49:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/20 17:49:02 Serving insecurely on HTTP port: 9090
	2023/09/20 17:49:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/20 17:50:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [85248483210d] <==
	* I0920 17:48:55.275692       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:48:55.309190       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:48:55.309677       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:48:55.316218       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:48:55.316840       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5bb5cdf7-8b39-456b-92e0-6051e2a5f46a", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-770000_4f4a09c4-86eb-4c70-929d-a99ebdca3787 became leader
	I0920 17:48:55.316940       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-770000_4f4a09c4-86eb-4c70-929d-a99ebdca3787!
	I0920 17:48:55.417204       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-770000_4f4a09c4-86eb-4c70-929d-a99ebdca3787!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-770000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-z4jsq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-770000 describe pod metrics-server-74d5856cc6-z4jsq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-770000 describe pod metrics-server-74d5856cc6-z4jsq: exit status 1 (49.848173ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-z4jsq" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-770000 describe pod metrics-server-74d5856cc6-z4jsq: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.76s)
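The post-mortem steps above can be replayed by hand against a live profile when triaging a similar failure. A minimal sketch reusing the commands from the log (the context and pod name are from this run and will differ elsewhere; the describe step exits non-zero here because the pod had already been replaced):

	# list pods in any namespace that are not in phase Running
	kubectl --context old-k8s-version-770000 get po -A --field-selector=status.phase!=Running
	# describe a specific non-running pod; exits 1 if it is already gone
	kubectl --context old-k8s-version-770000 describe pod metrics-server-74d5856cc6-z4jsq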

Test pass (283/307)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 15.15
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.28.2/json-events 10.53
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.3
16 TestDownloadOnly/DeleteAll 0.37
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.35
19 TestBinaryMirror 0.96
20 TestOffline 54.54
23 TestCertOptions 37.8
24 TestCertExpiration 250.52
25 TestDockerFlags 45.43
26 TestForceSystemdFlag 40.01
27 TestForceSystemdEnv 38.67
30 TestHyperKitDriverInstallOrUpdate 7.24
33 TestErrorSpam/setup 34
34 TestErrorSpam/start 1.43
35 TestErrorSpam/status 0.43
36 TestErrorSpam/pause 1.21
37 TestErrorSpam/unpause 1.27
38 TestErrorSpam/stop 5.61
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 88.58
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 38.53
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.08
49 TestFunctional/serial/CacheCmd/cache/add_remote 4.31
50 TestFunctional/serial/CacheCmd/cache/add_local 1.58
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
52 TestFunctional/serial/CacheCmd/cache/list 0.06
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
54 TestFunctional/serial/CacheCmd/cache/cache_reload 1.36
55 TestFunctional/serial/CacheCmd/cache/delete 0.13
56 TestFunctional/serial/MinikubeKubectlCmd 0.53
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.72
58 TestFunctional/serial/ExtraConfig 36.24
59 TestFunctional/serial/ComponentHealth 0.05
60 TestFunctional/serial/LogsCmd 2.6
61 TestFunctional/serial/LogsFileCmd 2.9
62 TestFunctional/serial/InvalidService 4.35
64 TestFunctional/parallel/ConfigCmd 0.37
65 TestFunctional/parallel/DashboardCmd 10.26
66 TestFunctional/parallel/DryRun 1.16
67 TestFunctional/parallel/InternationalLanguage 0.92
68 TestFunctional/parallel/StatusCmd 0.59
72 TestFunctional/parallel/ServiceCmdConnect 18.39
73 TestFunctional/parallel/AddonsCmd 0.22
74 TestFunctional/parallel/PersistentVolumeClaim 27.75
76 TestFunctional/parallel/SSHCmd 0.28
77 TestFunctional/parallel/CpCmd 0.56
78 TestFunctional/parallel/MySQL 25.71
79 TestFunctional/parallel/FileSync 0.15
80 TestFunctional/parallel/CertSync 0.9
84 TestFunctional/parallel/NodeLabels 0.05
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.2
88 TestFunctional/parallel/License 0.42
90 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.35
91 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
93 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.17
94 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
95 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
96 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
97 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
98 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
99 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
100 TestFunctional/parallel/ServiceCmd/DeployApp 7.12
101 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
102 TestFunctional/parallel/ProfileCmd/profile_list 0.25
103 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
104 TestFunctional/parallel/MountCmd/any-port 5.97
105 TestFunctional/parallel/ServiceCmd/List 0.55
106 TestFunctional/parallel/MountCmd/specific-port 1.41
107 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
108 TestFunctional/parallel/ServiceCmd/HTTPS 0.23
109 TestFunctional/parallel/ServiceCmd/Format 0.25
110 TestFunctional/parallel/ServiceCmd/URL 0.24
111 TestFunctional/parallel/MountCmd/VerifyCleanup 1.97
112 TestFunctional/parallel/Version/short 0.08
113 TestFunctional/parallel/Version/components 0.44
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.14
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.14
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.48
119 TestFunctional/parallel/ImageCommands/Setup 2.5
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.54
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.8
123 TestFunctional/parallel/DockerEnv/bash 0.72
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.07
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.35
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.11
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.36
131 TestFunctional/delete_addon-resizer_images 0.13
132 TestFunctional/delete_my-image_image 0.05
133 TestFunctional/delete_minikube_cached_images 0.05
137 TestImageBuild/serial/Setup 37.87
138 TestImageBuild/serial/NormalBuild 1.21
139 TestImageBuild/serial/BuildWithBuildArg 0.66
140 TestImageBuild/serial/BuildWithDockerIgnore 0.21
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.19
144 TestIngressAddonLegacy/StartLegacyK8sCluster 66.01
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 19.31
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.57
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 36.54
151 TestJSONOutput/start/Command 65.3
152 TestJSONOutput/start/Audit 0
157 TestJSONOutput/pause/Command 0.44
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.41
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 8.14
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.7
179 TestMainNoArgs 0.06
183 TestMountStart/serial/StartWithMountFirst 16.31
184 TestMountStart/serial/VerifyMountFirst 0.26
185 TestMountStart/serial/StartWithMountSecond 16.16
186 TestMountStart/serial/VerifyMountSecond 0.27
187 TestMountStart/serial/DeleteFirst 2.31
188 TestMountStart/serial/VerifyMountPostDelete 0.27
189 TestMountStart/serial/Stop 2.23
190 TestMountStart/serial/RestartStopped 16.98
191 TestMountStart/serial/VerifyMountPostStop 0.28
194 TestMultiNode/serial/FreshStart2Nodes 96.18
195 TestMultiNode/serial/DeployApp2Nodes 4.55
196 TestMultiNode/serial/PingHostFrom2Pods 0.81
197 TestMultiNode/serial/AddNode 32.71
198 TestMultiNode/serial/ProfileList 0.18
199 TestMultiNode/serial/CopyFile 4.66
200 TestMultiNode/serial/StopNode 2.62
201 TestMultiNode/serial/StartAfterStop 27.17
202 TestMultiNode/serial/RestartKeepsNodes 124.1
203 TestMultiNode/serial/DeleteNode 2.91
204 TestMultiNode/serial/StopMultiNode 16.44
205 TestMultiNode/serial/RestartMultiNode 118.16
206 TestMultiNode/serial/ValidateNameConflict 44.42
210 TestPreload 165.83
212 TestScheduledStopUnix 106.19
213 TestSkaffold 108.29
216 TestRunningBinaryUpgrade 160.96
218 TestKubernetesUpgrade 141.34
231 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.31
232 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.57
233 TestStoppedBinaryUpgrade/Setup 0.88
234 TestStoppedBinaryUpgrade/Upgrade 152.74
236 TestPause/serial/Start 60.41
237 TestStoppedBinaryUpgrade/MinikubeLogs 2.51
246 TestNoKubernetes/serial/StartNoK8sWithVersion 0.54
247 TestNoKubernetes/serial/StartWithK8s 35.89
248 TestPause/serial/SecondStartNoReconfiguration 50.78
249 TestNoKubernetes/serial/StartWithStopK8s 7.3
250 TestNoKubernetes/serial/Start 18.11
251 TestPause/serial/Pause 0.49
252 TestPause/serial/VerifyStatus 0.14
253 TestPause/serial/Unpause 0.47
254 TestPause/serial/PauseAgain 0.57
255 TestPause/serial/DeletePaused 5.25
256 TestPause/serial/VerifyDeletedResources 0.16
257 TestNetworkPlugins/group/auto/Start 52.12
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.12
259 TestNoKubernetes/serial/ProfileList 0.37
260 TestNoKubernetes/serial/Stop 2.19
261 TestNoKubernetes/serial/StartNoArgs 21.46
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.12
263 TestNetworkPlugins/group/kindnet/Start 59.01
264 TestNetworkPlugins/group/auto/KubeletFlags 0.13
265 TestNetworkPlugins/group/auto/NetCatPod 12.19
266 TestNetworkPlugins/group/auto/DNS 0.13
267 TestNetworkPlugins/group/auto/Localhost 0.1
268 TestNetworkPlugins/group/auto/HairPin 0.1
269 TestNetworkPlugins/group/calico/Start 77.86
270 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
271 TestNetworkPlugins/group/kindnet/KubeletFlags 0.17
272 TestNetworkPlugins/group/kindnet/NetCatPod 12.19
273 TestNetworkPlugins/group/kindnet/DNS 0.13
274 TestNetworkPlugins/group/kindnet/Localhost 0.11
275 TestNetworkPlugins/group/kindnet/HairPin 0.11
276 TestNetworkPlugins/group/custom-flannel/Start 59
277 TestNetworkPlugins/group/calico/ControllerPod 5.02
278 TestNetworkPlugins/group/calico/KubeletFlags 0.15
279 TestNetworkPlugins/group/calico/NetCatPod 12.22
280 TestNetworkPlugins/group/calico/DNS 0.16
281 TestNetworkPlugins/group/calico/Localhost 0.11
282 TestNetworkPlugins/group/calico/HairPin 0.11
283 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
284 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.19
285 TestNetworkPlugins/group/custom-flannel/DNS 0.13
286 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
287 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
288 TestNetworkPlugins/group/false/Start 51.77
289 TestNetworkPlugins/group/enable-default-cni/Start 49.52
290 TestNetworkPlugins/group/false/KubeletFlags 0.17
291 TestNetworkPlugins/group/false/NetCatPod 13.18
292 TestNetworkPlugins/group/false/DNS 0.13
293 TestNetworkPlugins/group/false/Localhost 0.12
294 TestNetworkPlugins/group/false/HairPin 0.1
295 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.14
296 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.22
297 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
298 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
299 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
300 TestNetworkPlugins/group/flannel/Start 58.72
301 TestNetworkPlugins/group/bridge/Start 47.75
302 TestNetworkPlugins/group/flannel/ControllerPod 5.01
303 TestNetworkPlugins/group/bridge/KubeletFlags 0.13
304 TestNetworkPlugins/group/bridge/NetCatPod 13.21
305 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
306 TestNetworkPlugins/group/flannel/NetCatPod 12.18
307 TestNetworkPlugins/group/flannel/DNS 0.13
308 TestNetworkPlugins/group/bridge/DNS 25.75
309 TestNetworkPlugins/group/flannel/Localhost 0.1
310 TestNetworkPlugins/group/flannel/HairPin 0.1
311 TestNetworkPlugins/group/kubenet/Start 78.82
312 TestNetworkPlugins/group/bridge/Localhost 0.11
313 TestNetworkPlugins/group/bridge/HairPin 0.11
315 TestStartStop/group/old-k8s-version/serial/FirstStart 151.01
316 TestNetworkPlugins/group/kubenet/KubeletFlags 0.14
317 TestNetworkPlugins/group/kubenet/NetCatPod 14.18
318 TestNetworkPlugins/group/kubenet/DNS 0.12
319 TestNetworkPlugins/group/kubenet/Localhost 0.1
320 TestNetworkPlugins/group/kubenet/HairPin 0.11
322 TestStartStop/group/no-preload/serial/FirstStart 58.57
323 TestStartStop/group/no-preload/serial/DeployApp 9.26
324 TestStartStop/group/old-k8s-version/serial/DeployApp 8.31
325 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
326 TestStartStop/group/no-preload/serial/Stop 8.28
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.62
328 TestStartStop/group/old-k8s-version/serial/Stop 8.23
329 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
330 TestStartStop/group/no-preload/serial/SecondStart 301.28
331 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
332 TestStartStop/group/old-k8s-version/serial/SecondStart 494.96
333 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
334 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
335 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.18
336 TestStartStop/group/no-preload/serial/Pause 1.82
338 TestStartStop/group/embed-certs/serial/FirstStart 46.79
339 TestStartStop/group/embed-certs/serial/DeployApp 11.26
340 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.81
341 TestStartStop/group/embed-certs/serial/Stop 8.23
342 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
343 TestStartStop/group/embed-certs/serial/SecondStart 297.95
344 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
345 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
347 TestStartStop/group/old-k8s-version/serial/Pause 1.7
349 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.02
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.82
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.23
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
354 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 299.66
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
356 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
357 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.18
358 TestStartStop/group/embed-certs/serial/Pause 1.79
360 TestStartStop/group/newest-cni/serial/FirstStart 47.94
361 TestStartStop/group/newest-cni/serial/DeployApp 0
362 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
363 TestStartStop/group/newest-cni/serial/Stop 8.24
364 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
365 TestStartStop/group/newest-cni/serial/SecondStart 35.69
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.17
369 TestStartStop/group/newest-cni/serial/Pause 1.74
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
372 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.18
373 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.84

TestDownloadOnly/v1.16.0/json-events (15.15s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-811000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-811000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (15.151286697s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (15.15s)
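With -o=json, minikube emits one CloudEvent per line instead of its usual text output; the JSONOutput step checks elsewhere in this report consume that stream. A rough sketch of inspecting it by hand, assuming jq is available (the event type and data field names are believed to match minikube's CloudEvent schema, but are not asserted by this test):

	out/minikube-darwin-amd64 start -o=json --download-only -p download-only-811000 --force \
	  --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + ": " + .data.name'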

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-811000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-811000: exit status 85 (301.129931ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-811000 | jenkins | v1.31.2 | 20 Sep 23 09:57 PDT |          |
	|         | -p download-only-811000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/20 09:57:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 09:57:09.719625    1786 out.go:296] Setting OutFile to fd 1 ...
	I0920 09:57:09.719870    1786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 09:57:09.719875    1786 out.go:309] Setting ErrFile to fd 2...
	I0920 09:57:09.719879    1786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 09:57:09.720057    1786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
	W0920 09:57:09.720156    1786 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/15927-1321/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15927-1321/.minikube/config/config.json: no such file or directory
	I0920 09:57:09.721730    1786 out.go:303] Setting JSON to true
	I0920 09:57:09.743532    1786 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1603,"bootTime":1695227426,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 09:57:09.743628    1786 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0920 09:57:09.765775    1786 out.go:97] [download-only-811000] minikube v1.31.2 on Darwin 13.5.2
	I0920 09:57:09.786611    1786 out.go:169] MINIKUBE_LOCATION=15927
	I0920 09:57:09.766021    1786 notify.go:220] Checking for updates...
	W0920 09:57:09.766042    1786 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 09:57:09.829846    1786 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	I0920 09:57:09.850598    1786 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 09:57:09.892851    1786 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 09:57:09.934799    1786 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	W0920 09:57:09.977764    1786 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 09:57:09.978200    1786 driver.go:373] Setting default libvirt URI to qemu:///system
	I0920 09:57:10.070638    1786 out.go:97] Using the hyperkit driver based on user configuration
	I0920 09:57:10.070673    1786 start.go:298] selected driver: hyperkit
	I0920 09:57:10.070682    1786 start.go:902] validating driver "hyperkit" against <nil>
	I0920 09:57:10.070850    1786 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:57:10.071139    1786 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15927-1321/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0920 09:57:10.212122    1786 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I0920 09:57:10.216262    1786 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 09:57:10.216296    1786 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0920 09:57:10.216341    1786 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0920 09:57:10.220550    1786 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0920 09:57:10.220706    1786 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 09:57:10.220734    1786 cni.go:84] Creating CNI manager for ""
	I0920 09:57:10.220747    1786 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 09:57:10.220755    1786 start_flags.go:321] config:
	{Name:download-only-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-811000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0920 09:57:10.221007    1786 iso.go:125] acquiring lock: {Name:mkeb4366e068e3c3b5036486999179c5031df1bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:57:10.242613    1786 out.go:97] Downloading VM boot image ...
	I0920 09:57:10.242742    1786 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0920 09:57:15.809741    1786 out.go:97] Starting control plane node download-only-811000 in cluster download-only-811000
	I0920 09:57:15.809781    1786 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0920 09:57:15.866770    1786 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0920 09:57:15.866816    1786 cache.go:57] Caching tarball of preloaded images
	I0920 09:57:15.867163    1786 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0920 09:57:15.887495    1786 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0920 09:57:15.887521    1786 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 09:57:15.980101    1786 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0920 09:57:22.070599    1786 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 09:57:22.070733    1786 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 09:57:22.607059    1786 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0920 09:57:22.607299    1786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/download-only-811000/config.json ...
	I0920 09:57:22.607325    1786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/download-only-811000/config.json: {Name:mkdf621c439c79a9b002393d64df1e6c714b889a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 09:57:22.607640    1786 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0920 09:57:22.607953    1786 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-811000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
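The Last Start log above shows the v1.16.0 preload tarball being downloaded and checksum-verified. As a hedged sketch, the same artifact can be fetched and checked manually on macOS, using the URL and md5 recorded in the log (md5 -q is the Darwin counterpart of md5sum):

	curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
	# compare against the checksum the downloader verified (md5:326f3ce331abb64565b50b8c9e791244)
	[ "$(md5 -q preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4)" = "326f3ce331abb64565b50b8c9e791244" ] && echo checksum OK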

TestDownloadOnly/v1.28.2/json-events (10.53s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-811000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-811000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=hyperkit : (10.52818035s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (10.53s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-811000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-811000: exit status 85 (301.065666ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-811000 | jenkins | v1.31.2 | 20 Sep 23 09:57 PDT |          |
	|         | -p download-only-811000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-811000 | jenkins | v1.31.2 | 20 Sep 23 09:57 PDT |          |
	|         | -p download-only-811000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/20 09:57:25
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 09:57:25.176774    1804 out.go:296] Setting OutFile to fd 1 ...
	I0920 09:57:25.177031    1804 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 09:57:25.177036    1804 out.go:309] Setting ErrFile to fd 2...
	I0920 09:57:25.177041    1804 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 09:57:25.177214    1804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
	W0920 09:57:25.177309    1804 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/15927-1321/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15927-1321/.minikube/config/config.json: no such file or directory
	I0920 09:57:25.178548    1804 out.go:303] Setting JSON to true
	I0920 09:57:25.197935    1804 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1619,"bootTime":1695227426,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 09:57:25.198031    1804 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0920 09:57:25.219250    1804 out.go:97] [download-only-811000] minikube v1.31.2 on Darwin 13.5.2
	I0920 09:57:25.241154    1804 out.go:169] MINIKUBE_LOCATION=15927
	I0920 09:57:25.219491    1804 notify.go:220] Checking for updates...
	I0920 09:57:25.284217    1804 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	I0920 09:57:25.305267    1804 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 09:57:25.326020    1804 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 09:57:25.347363    1804 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	W0920 09:57:25.390171    1804 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 09:57:25.390929    1804 config.go:182] Loaded profile config "download-only-811000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0920 09:57:25.390999    1804 start.go:810] api.Load failed for download-only-811000: filestore "download-only-811000": Docker machine "download-only-811000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0920 09:57:25.391153    1804 driver.go:373] Setting default libvirt URI to qemu:///system
	W0920 09:57:25.391197    1804 start.go:810] api.Load failed for download-only-811000: filestore "download-only-811000": Docker machine "download-only-811000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0920 09:57:25.419276    1804 out.go:97] Using the hyperkit driver based on existing profile
	I0920 09:57:25.419321    1804 start.go:298] selected driver: hyperkit
	I0920 09:57:25.419334    1804 start.go:902] validating driver "hyperkit" against &{Name:download-only-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-811000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0920 09:57:25.419651    1804 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:57:25.419881    1804 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15927-1321/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0920 09:57:25.427984    1804 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I0920 09:57:25.431398    1804 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 09:57:25.431416    1804 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0920 09:57:25.433750    1804 cni.go:84] Creating CNI manager for ""
	I0920 09:57:25.433771    1804 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 09:57:25.433786    1804 start_flags.go:321] config:
	{Name:download-only-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-811000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0920 09:57:25.433935    1804 iso.go:125] acquiring lock: {Name:mkeb4366e068e3c3b5036486999179c5031df1bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 09:57:25.455270    1804 out.go:97] Starting control plane node download-only-811000 in cluster download-only-811000
	I0920 09:57:25.455305    1804 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0920 09:57:25.512826    1804 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0920 09:57:25.512882    1804 cache.go:57] Caching tarball of preloaded images
	I0920 09:57:25.513267    1804 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0920 09:57:25.534744    1804 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I0920 09:57:25.534788    1804 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I0920 09:57:25.637473    1804 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4?checksum=md5:30a5cb95ef165c1e9196502a3ab2be2b -> /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0920 09:57:32.424864    1804 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I0920 09:57:32.425056    1804 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I0920 09:57:33.036745    1804 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0920 09:57:33.036819    1804 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/download-only-811000/config.json ...
	I0920 09:57:33.037133    1804 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0920 09:57:33.037381    1804 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15927-1321/.minikube/cache/darwin/amd64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-811000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.30s)

TestDownloadOnly/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.37s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-811000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)
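As the name suggests, DeleteAlwaysSucceeds appears to assert that deleting a profile exits cleanly even when there is nothing left to delete. A minimal sketch, with the exit-0-on-repeat behavior an assumption drawn from the test name rather than from this log:

	out/minikube-darwin-amd64 delete -p download-only-811000
	# repeating the delete should still succeed rather than fail on the missing profile
	out/minikube-darwin-amd64 delete -p download-only-811000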

TestBinaryMirror (0.96s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-550000 --alsologtostderr --binary-mirror http://127.0.0.1:49365 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-550000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-550000
--- PASS: TestBinaryMirror (0.96s)
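
TestBinaryMirror exercises the --binary-mirror flag, which redirects the kubectl/kubeadm/kubelet binary downloads away from dl.k8s.io. For reference, a minimal by-hand sketch of the same flow, assuming an installed minikube in place of the test binary, a scratch profile name binary-mirror-demo, and a mirror already listening on the port shown (in the run above, the test spun up its own helper server on 49365):

	# Download-only start that fetches Kubernetes binaries from the mirror:
	minikube start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:49365 --driver=hyperkit
	minikube delete -p binary-mirror-demo    # clean up, as the test teardown does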

TestOffline (54.54s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-590000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-590000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (49.18801333s)
helpers_test.go:175: Cleaning up "offline-docker-590000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-590000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-590000: (5.351788784s)
--- PASS: TestOffline (54.54s)

TestCertOptions (37.8s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-915000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-915000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (34.110696895s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-915000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-915000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-915000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-915000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-915000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-915000: (3.38178292s)
--- PASS: TestCertOptions (37.80s)
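
The section above asserts that the extra --apiserver-ips/--apiserver-names/--apiserver-port values end up in the generated apiserver certificate and kubeconfig. A rough equivalent by hand, assuming a scratch profile name cert-demo (the grep filter is illustrative, not part of the test):

	minikube start -p cert-demo --memory=2048 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit
	# The extra IPs and names should appear among the certificate's SANs:
	minikube -p cert-demo ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"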

TestCertExpiration (250.52s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-479000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-479000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (33.287515431s)
E0920 10:25:19.006530    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-479000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0920 10:28:29.255717    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-479000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (31.922913487s)
helpers_test.go:175: Cleaning up "cert-expiration-479000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-479000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-479000: (5.306966752s)
--- PASS: TestCertExpiration (250.52s)
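
Most of this test's 250s is spent waiting out the 3-minute certificate TTL before the second start, which is expected to regenerate the expired certificates. A condensed sketch, assuming a scratch profile name exp-demo:

	minikube start -p exp-demo --memory=2048 --cert-expiration=3m --driver=hyperkit
	sleep 180    # let the 3-minute certificates lapse, as the test does
	# The restart with a longer TTL should succeed and reissue the certs:
	minikube start -p exp-demo --memory=2048 --cert-expiration=8760h --driver=hyperkit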

TestDockerFlags (45.43s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-841000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-841000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (39.838013795s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-841000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-841000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-841000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-841000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-841000: (5.31794826s)
--- PASS: TestDockerFlags (45.43s)
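
The two systemctl probes above check that --docker-env values land in the docker unit's Environment= property and --docker-opt values in its ExecStart command line. The same check by hand, assuming a scratch profile name flags-demo:

	minikube start -p flags-demo --memory=2048 \
	  --docker-env=FOO=BAR --docker-opt=debug --driver=hyperkit
	# FOO=BAR should be listed here ...
	minikube -p flags-demo ssh "sudo systemctl show docker --property=Environment --no-pager"
	# ... and --debug should appear in the daemon's command line:
	minikube -p flags-demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"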

TestForceSystemdFlag (40.01s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-857000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-857000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (34.605490089s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-857000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-857000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-857000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-857000: (5.248019375s)
--- PASS: TestForceSystemdFlag (40.01s)
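
The docker info probe above is the whole assertion: with --force-systemd, Docker inside the node should report the systemd cgroup driver instead of the default. By hand, assuming a scratch profile name systemd-demo:

	minikube start -p systemd-demo --memory=2048 --force-systemd --driver=hyperkit
	# Expect "systemd" here; without the flag this typically prints "cgroupfs":
	minikube -p systemd-demo ssh "docker info --format {{.CgroupDriver}}"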

TestForceSystemdEnv (38.67s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-008000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-008000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (33.1814415s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-008000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-008000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-008000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-008000: (5.335407069s)
--- PASS: TestForceSystemdEnv (38.67s)

TestHyperKitDriverInstallOrUpdate (7.24s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.24s)

TestErrorSpam/setup (34s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-719000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-719000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 --driver=hyperkit : (34.002604581s)
--- PASS: TestErrorSpam/setup (34.00s)

TestErrorSpam/start (1.43s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 start --dry-run
--- PASS: TestErrorSpam/start (1.43s)

TestErrorSpam/status (0.43s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 status
--- PASS: TestErrorSpam/status (0.43s)

TestErrorSpam/pause (1.21s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 pause
--- PASS: TestErrorSpam/pause (1.21s)

TestErrorSpam/unpause (1.27s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 unpause
--- PASS: TestErrorSpam/unpause (1.27s)

TestErrorSpam/stop (5.61s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 stop: (5.220196787s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-719000 stop
--- PASS: TestErrorSpam/stop (5.61s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/15927-1321/.minikube/files/etc/test/nested/copy/1784/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.58s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-477000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-477000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m28.577673134s)
--- PASS: TestFunctional/serial/StartWithProxy (88.58s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.53s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-477000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-477000 --alsologtostderr -v=8: (38.53028535s)
functional_test.go:659: soft start took 38.530787889s for "functional-477000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.53s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-477000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 cache add registry.k8s.io/pause:3.1: (1.56595894s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 cache add registry.k8s.io/pause:3.3: (1.41899314s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 cache add registry.k8s.io/pause:latest: (1.325120256s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.31s)
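
Each cache add above pulls the image on the host and loads it into the node's container runtime, which is why every invocation costs roughly 1.3-1.6s. The user-facing workflow, with <profile> as a placeholder profile name:

	minikube -p <profile> cache add registry.k8s.io/pause:3.1   # pull + load into the node
	minikube cache list                                         # the cache itself is global
	minikube cache delete registry.k8s.io/pause:3.1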

TestFunctional/serial/CacheCmd/cache/add_local (1.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local2240675962/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 cache add minikube-local-cache-test:functional-477000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 cache delete minikube-local-cache-test:functional-477000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-477000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.58s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-477000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (139.64289ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.36s)
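
The sequence above deletes a cached image inside the node (so crictl inspecti fails with exit 1), then uses cache reload to push the host-side cache back into the node. Condensed, with <profile> as a placeholder:

	minikube -p <profile> ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
	minikube -p <profile> cache reload
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again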

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 kubectl -- --context functional-477000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-477000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

TestFunctional/serial/ExtraConfig (36.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-477000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-477000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.239247599s)
functional_test.go:757: restart took 36.239406418s for "functional-477000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.24s)
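
--extra-config takes component.key=value pairs and forwards them to the named Kubernetes component; the restart above wires an extra admission plugin into the apiserver. The shape of the flag, with <profile> as a placeholder:

	# component.key=value; here the apiserver gains an admission plugin:
	minikube start -p <profile> \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all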

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-477000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.6s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 logs: (2.599167855s)
--- PASS: TestFunctional/serial/LogsCmd (2.60s)

TestFunctional/serial/LogsFileCmd (2.9s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1135870472/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1135870472/001/logs.txt: (2.899955678s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.90s)

TestFunctional/serial/InvalidService (4.35s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-477000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-477000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-477000: exit status 115 (256.269002ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.64.4:31758 |
	|-----------|-------------|-------------|---------------------------|
	
	
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-477000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-477000 config get cpus: exit status 14 (39.769051ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-477000 config get cpus: exit status 14 (40.032928ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
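
The exit status 14 above is the behaviour the test relies on: config get on an unset key fails rather than printing an empty value. The round trip, with <profile> as a placeholder:

	minikube -p <profile> config set cpus 2
	minikube -p <profile> config get cpus      # prints 2
	minikube -p <profile> config unset cpus
	minikube -p <profile> config get cpus      # exit status 14: key not found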

TestFunctional/parallel/DashboardCmd (10.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-477000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-477000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2646: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.26s)

TestFunctional/parallel/DryRun (1.16s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-477000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-477000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (632.845377ms)

-- stdout --
	* [functional-477000] minikube v1.31.2 on Darwin 13.5.2
	  - MINIKUBE_LOCATION=15927
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 10:02:24.028291    2596 out.go:296] Setting OutFile to fd 1 ...
	I0920 10:02:24.028532    2596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 10:02:24.028539    2596 out.go:309] Setting ErrFile to fd 2...
	I0920 10:02:24.028543    2596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 10:02:24.028721    2596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
	I0920 10:02:24.030285    2596 out.go:303] Setting JSON to false
	I0920 10:02:24.051284    2596 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1918,"bootTime":1695227426,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 10:02:24.051389    2596 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0920 10:02:24.072397    2596 out.go:177] * [functional-477000] minikube v1.31.2 on Darwin 13.5.2
	I0920 10:02:24.151457    2596 out.go:177]   - MINIKUBE_LOCATION=15927
	I0920 10:02:24.114400    2596 notify.go:220] Checking for updates...
	I0920 10:02:24.193326    2596 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	I0920 10:02:24.235289    2596 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 10:02:24.277323    2596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:02:24.340292    2596 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	I0920 10:02:24.384308    2596 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:02:24.423532    2596 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0920 10:02:24.424077    2596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:02:24.424160    2596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:02:24.431408    2596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50078
	I0920 10:02:24.431834    2596 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:02:24.432266    2596 main.go:141] libmachine: Using API Version  1
	I0920 10:02:24.432280    2596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:02:24.432496    2596 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:02:24.432610    2596 main.go:141] libmachine: (functional-477000) Calling .DriverName
	I0920 10:02:24.432812    2596 driver.go:373] Setting default libvirt URI to qemu:///system
	I0920 10:02:24.433065    2596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:02:24.433089    2596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:02:24.440322    2596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50080
	I0920 10:02:24.440722    2596 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:02:24.441082    2596 main.go:141] libmachine: Using API Version  1
	I0920 10:02:24.441107    2596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:02:24.441336    2596 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:02:24.441456    2596 main.go:141] libmachine: (functional-477000) Calling .DriverName
	I0920 10:02:24.469424    2596 out.go:177] * Using the hyperkit driver based on existing profile
	I0920 10:02:24.511479    2596 start.go:298] selected driver: hyperkit
	I0920 10:02:24.511496    2596 start.go:902] validating driver "hyperkit" against &{Name:functional-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0920 10:02:24.511628    2596 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:02:24.535428    2596 out.go:177] 
	W0920 10:02:24.556465    2596 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 10:02:24.577468    2596 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-477000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.16s)
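
--dry-run runs the full validation path without touching the VM, which is how the 250MB request is rejected against the 1800MB floor (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) while the existing functional-477000 profile keeps running. By hand, with <profile> as a placeholder:

	# Validation only; in the run above this invocation exited 23:
	minikube start -p <profile> --dry-run --memory 250MB --driver=hyperkit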

TestFunctional/parallel/InternationalLanguage (0.92s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-477000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-477000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (916.001586ms)

-- stdout --
	* [functional-477000] minikube v1.31.2 sur Darwin 13.5.2
	  - MINIKUBE_LOCATION=15927
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 10:02:23.135836    2569 out.go:296] Setting OutFile to fd 1 ...
	I0920 10:02:23.156954    2569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 10:02:23.157002    2569 out.go:309] Setting ErrFile to fd 2...
	I0920 10:02:23.157023    2569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 10:02:23.157422    2569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
	I0920 10:02:23.198906    2569 out.go:303] Setting JSON to false
	I0920 10:02:23.218978    2569 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1917,"bootTime":1695227426,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 10:02:23.219082    2569 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0920 10:02:23.261494    2569 out.go:177] * [functional-477000] minikube v1.31.2 sur Darwin 13.5.2
	I0920 10:02:23.325300    2569 out.go:177]   - MINIKUBE_LOCATION=15927
	I0920 10:02:23.304482    2569 notify.go:220] Checking for updates...
	I0920 10:02:23.493453    2569 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	I0920 10:02:23.556493    2569 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 10:02:23.640253    2569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:02:23.724513    2569 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	I0920 10:02:23.766536    2569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:02:23.788111    2569 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0920 10:02:23.788590    2569 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:02:23.788642    2569 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:02:23.796040    2569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50059
	I0920 10:02:23.796426    2569 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:02:23.796917    2569 main.go:141] libmachine: Using API Version  1
	I0920 10:02:23.796935    2569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:02:23.797156    2569 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:02:23.797276    2569 main.go:141] libmachine: (functional-477000) Calling .DriverName
	I0920 10:02:23.797494    2569 driver.go:373] Setting default libvirt URI to qemu:///system
	I0920 10:02:23.797759    2569 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:02:23.797782    2569 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:02:23.804918    2569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50061
	I0920 10:02:23.805279    2569 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:02:23.805626    2569 main.go:141] libmachine: Using API Version  1
	I0920 10:02:23.805637    2569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:02:23.805856    2569 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:02:23.805968    2569 main.go:141] libmachine: (functional-477000) Calling .DriverName
	I0920 10:02:23.833488    2569 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0920 10:02:23.875508    2569 start.go:298] selected driver: hyperkit
	I0920 10:02:23.875525    2569 start.go:902] validating driver "hyperkit" against &{Name:functional-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0920 10:02:23.875678    2569 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:02:23.900514    2569 out.go:177] 
	W0920 10:02:23.921575    2569 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 10:02:23.944322    2569 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.92s)

TestFunctional/parallel/StatusCmd (0.59s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.59s)
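
The second invocation above shows status accepting a Go template via -f. The three output modes, with <profile> as a placeholder:

	minikube -p <profile> status                                                # default table
	minikube -p <profile> status -f 'host:{{.Host}},apiserver:{{.APIServer}}'   # Go template
	minikube -p <profile> status -o json                                        # machine-readable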

TestFunctional/parallel/ServiceCmdConnect (18.39s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-477000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-477000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-5gv7l" [6db0fe9f-76fc-4e9c-adc6-184dfe9b0b54] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-5gv7l" [6db0fe9f-76fc-4e9c-adc6-184dfe9b0b54] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.016034801s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.64.4:32634
functional_test.go:1674: http://192.168.64.4:32634: success! body:
Hostname: hello-node-connect-55497b8b78-5gv7l

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.64.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.64.4:32634
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.39s)
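
The flow above is the standard NodePort round trip: create a deployment, expose it, and let service --url resolve the node IP plus the assigned NodePort (http://192.168.64.4:32634 in this run). Condensed, with <profile> as a placeholder:

	kubectl --context <profile> create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context <profile> expose deployment hello-node --type=NodePort --port=8080
	# Prints a ready-to-curl URL once the pod is Running:
	minikube -p <profile> service hello-node --url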

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (27.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [83ff6518-d71e-4b6c-bba4-717b4ab0bab0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01604998s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-477000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-477000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-477000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-477000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ae4521c5-8620-4c21-94df-ef7db50e461c] Pending
helpers_test.go:344: "sp-pod" [ae4521c5-8620-4c21-94df-ef7db50e461c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ae4521c5-8620-4c21-94df-ef7db50e461c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.008554245s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-477000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-477000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-477000 delete -f testdata/storage-provisioner/pod.yaml: (1.034941052s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-477000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cf141c48-3d53-4481-8452-d2bde0e47c5b] Pending
helpers_test.go:344: "sp-pod" [cf141c48-3d53-4481-8452-d2bde0e47c5b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cf141c48-3d53-4481-8452-d2bde0e47c5b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010829524s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-477000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.75s)
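
The cycle above (apply a claim, mount it in a pod, write a file, recreate the pod, read the file back) can be reproduced by hand. A minimal sketch of such a claim, not the actual testdata/storage-provisioner/pvc.yaml, relying on the "standard" class backed by minikube's storage provisioner:

kubectl --context functional-477000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-477000 get pvc myclaim -o jsonpath='{.status.phase}'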

TestFunctional/parallel/SSHCmd (0.28s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.28s)

TestFunctional/parallel/CpCmd (0.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh -n functional-477000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 cp functional-477000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd3239374516/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh -n functional-477000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.56s)
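
The cp/ssh pair above round-trips a file from host to VM and back. The same flow standalone (a sketch; file names are illustrative):

echo "hello from host" > /tmp/cp-demo.txt
out/minikube-darwin-amd64 -p functional-477000 cp /tmp/cp-demo.txt /home/docker/cp-demo.txt
out/minikube-darwin-amd64 -p functional-477000 ssh "sudo cat /home/docker/cp-demo.txt"
out/minikube-darwin-amd64 -p functional-477000 cp functional-477000:/home/docker/cp-demo.txt /tmp/cp-demo-back.txt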

TestFunctional/parallel/MySQL (25.71s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-477000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-qb7m5" [755ba01a-1a37-4424-a34e-913dbcdc4918] Pending
helpers_test.go:344: "mysql-859648c796-qb7m5" [755ba01a-1a37-4424-a34e-913dbcdc4918] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-qb7m5" [755ba01a-1a37-4424-a34e-913dbcdc4918] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.010206352s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-477000 exec mysql-859648c796-qb7m5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-477000 exec mysql-859648c796-qb7m5 -- mysql -ppassword -e "show databases;": exit status 1 (116.914871ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-477000 exec mysql-859648c796-qb7m5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.71s)
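
The first exec failed with ERROR 2002 because mysqld was not yet accepting connections when the pod went Ready, so the harness retried and the second attempt passed. A polling sketch that sidesteps the race (pod name as assigned in this run):

until kubectl --context functional-477000 exec mysql-859648c796-qb7m5 -- \
    mysqladmin -ppassword ping --silent 2>/dev/null; do
  sleep 2
done
kubectl --context functional-477000 exec mysql-859648c796-qb7m5 -- mysql -ppassword -e "show databases;"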

TestFunctional/parallel/FileSync (0.15s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1784/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "sudo cat /etc/test/nested/copy/1784/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.15s)

TestFunctional/parallel/CertSync (0.9s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1784.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "sudo cat /etc/ssl/certs/1784.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1784.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "sudo cat /usr/share/ca-certificates/1784.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "sudo cat /etc/ssl/certs/17842.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "sudo cat /usr/share/ca-certificates/17842.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.90s)
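
The hashed names checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for /etc/ssl/certs. The hash behind such a link can be recomputed (a sketch, assuming openssl is present in the guest; 51391683 would be the expected output here):

out/minikube-darwin-amd64 -p functional-477000 ssh \
  "openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/1784.pem"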

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-477000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.2s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-477000 ssh "sudo systemctl is-active crio": exit status 1 (198.814467ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.20s)
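
"systemctl is-active" prints the unit state and exits non-zero for anything other than active (3 for inactive, as seen above), which propagates through ssh as the command's exit status; the test asserts exactly this, since docker rather than crio is the configured runtime. By hand:

out/minikube-darwin-amd64 -p functional-477000 ssh "sudo systemctl is-active crio" \
  || echo "crio is not the active runtime"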

TestFunctional/parallel/License (0.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-477000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-477000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-477000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2342: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-477000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.35s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-477000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-477000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [64aec62b-e7dc-46fe-ab09-754e89be87e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [64aec62b-e7dc-46fe-ab09-754e89be87e0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.012046641s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-477000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.13.234 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-477000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
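
Taken together, the serial tunnel steps reduce to this manual workflow (a sketch; 10.107.13.234 is the LoadBalancer IP assigned in this run, and the tunnel process must stay running):

out/minikube-darwin-amd64 -p functional-477000 tunnel &
kubectl --context functional-477000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -s http://10.107.13.234/ | head -n 4
dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A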

TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-477000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-477000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-8cndl" [726452f9-780a-4c8f-8442-31ef09f31a5c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-8cndl" [726452f9-780a-4c8f-8442-31ef09f31a5c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.012219189s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "189.497162ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "60.790768ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "185.069867ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "61.464576ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)
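
The JSON form is the script-friendly one. A sketch, assuming jq and the "valid"/"invalid" top-level arrays that "profile list -o json" emits in this minikube version:

out/minikube-darwin-amd64 profile list -o json | jq -r '.valid[].Name'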

TestFunctional/parallel/MountCmd/any-port (5.97s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port848627372/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1695229335068261000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port848627372/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1695229335068261000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port848627372/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1695229335068261000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port848627372/001/test-1695229335068261000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (137.247661ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 17:02 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 17:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 17:02 test-1695229335068261000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh cat /mount-9p/test-1695229335068261000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-477000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d5784a83-671f-4b33-ad4f-c616377d4a26] Pending
helpers_test.go:344: "busybox-mount" [d5784a83-671f-4b33-ad4f-c616377d4a26] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d5784a83-671f-4b33-ad4f-c616377d4a26] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d5784a83-671f-4b33-ad4f-c616377d4a26] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.011551834s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-477000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port848627372/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.97s)
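
The any-port cycle above, reduced to its essentials (a sketch; the local path is illustrative, and the mount command must stay running in the background):

out/minikube-darwin-amd64 mount -p functional-477000 /tmp/demo-mount:/mount-9p &
out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-darwin-amd64 -p functional-477000 ssh "sudo umount -f /mount-9p"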

TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/MountCmd/specific-port (1.41s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1678036332/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (136.336828ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1678036332/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-477000 ssh "sudo umount -f /mount-9p": exit status 1 (114.581643ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-477000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1678036332/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.41s)
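
The exit status 32 above is umount's generic failure code; here it only means the 9p mount was already gone once the mount daemon had been stopped, so the forced cleanup was a no-op the test tolerates. A tolerant variant of the same cleanup:

out/minikube-darwin-amd64 -p functional-477000 ssh "sudo umount -f /mount-9p || true"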

TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 service list -o json
functional_test.go:1493: Took "388.020669ms" to run "out/minikube-darwin-amd64 -p functional-477000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.64.4:30989
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

TestFunctional/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.25s)

TestFunctional/parallel/ServiceCmd/URL (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.64.4:30989
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.97s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1628728903/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1628728903/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1628728903/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T" /mount1: exit status 1 (143.071807ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-477000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1628728903/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1628728903/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-477000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1628728903/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.97s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-477000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-477000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-477000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-477000 image ls --format short --alsologtostderr:
I0920 10:02:44.613850    2825 out.go:296] Setting OutFile to fd 1 ...
I0920 10:02:44.614038    2825 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:44.614043    2825 out.go:309] Setting ErrFile to fd 2...
I0920 10:02:44.614047    2825 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:44.614228    2825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
I0920 10:02:44.614848    2825 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0920 10:02:44.614947    2825 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0920 10:02:44.615300    2825 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0920 10:02:44.615355    2825 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0920 10:02:44.622286    2825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50306
I0920 10:02:44.622680    2825 main.go:141] libmachine: () Calling .GetVersion
I0920 10:02:44.623171    2825 main.go:141] libmachine: Using API Version  1
I0920 10:02:44.623189    2825 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 10:02:44.623400    2825 main.go:141] libmachine: () Calling .GetMachineName
I0920 10:02:44.623512    2825 main.go:141] libmachine: (functional-477000) Calling .GetState
I0920 10:02:44.623604    2825 main.go:141] libmachine: (functional-477000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0920 10:02:44.623663    2825 main.go:141] libmachine: (functional-477000) DBG | hyperkit pid from json: 2008
I0920 10:02:44.624894    2825 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0920 10:02:44.624931    2825 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0920 10:02:44.631797    2825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50308
I0920 10:02:44.632146    2825 main.go:141] libmachine: () Calling .GetVersion
I0920 10:02:44.632508    2825 main.go:141] libmachine: Using API Version  1
I0920 10:02:44.632522    2825 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 10:02:44.632722    2825 main.go:141] libmachine: () Calling .GetMachineName
I0920 10:02:44.632817    2825 main.go:141] libmachine: (functional-477000) Calling .DriverName
I0920 10:02:44.632964    2825 ssh_runner.go:195] Run: systemctl --version
I0920 10:02:44.632984    2825 main.go:141] libmachine: (functional-477000) Calling .GetSSHHostname
I0920 10:02:44.633059    2825 main.go:141] libmachine: (functional-477000) Calling .GetSSHPort
I0920 10:02:44.633136    2825 main.go:141] libmachine: (functional-477000) Calling .GetSSHKeyPath
I0920 10:02:44.633228    2825 main.go:141] libmachine: (functional-477000) Calling .GetSSHUsername
I0920 10:02:44.633311    2825 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/functional-477000/id_rsa Username:docker}
I0920 10:02:44.676518    2825 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 10:02:44.694824    2825 main.go:141] libmachine: Making call to close driver server
I0920 10:02:44.694833    2825 main.go:141] libmachine: (functional-477000) Calling .Close
I0920 10:02:44.695004    2825 main.go:141] libmachine: (functional-477000) DBG | Closing plugin on server side
I0920 10:02:44.695012    2825 main.go:141] libmachine: Successfully made call to close driver server
I0920 10:02:44.695029    2825 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 10:02:44.695041    2825 main.go:141] libmachine: Making call to close driver server
I0920 10:02:44.695048    2825 main.go:141] libmachine: (functional-477000) Calling .Close
I0920 10:02:44.695206    2825 main.go:141] libmachine: Successfully made call to close driver server
I0920 10:02:44.695216    2825 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 10:02:44.695222    2825 main.go:141] libmachine: (functional-477000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.14s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-477000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| docker.io/library/nginx                     | alpine            | 433dbc17191a7 | 42.6MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-477000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-477000 | 952b2c0665b44 | 30B    |
| docker.io/library/nginx                     | latest            | f5a6b296b8a29 | 187MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-477000 image ls --format table --alsologtostderr:
I0920 10:02:45.070688    2837 out.go:296] Setting OutFile to fd 1 ...
I0920 10:02:45.071238    2837 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:45.071247    2837 out.go:309] Setting ErrFile to fd 2...
I0920 10:02:45.071253    2837 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:45.071863    2837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
I0920 10:02:45.072526    2837 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0920 10:02:45.072622    2837 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0920 10:02:45.072970    2837 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0920 10:02:45.073027    2837 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0920 10:02:45.080047    2837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50321
I0920 10:02:45.080455    2837 main.go:141] libmachine: () Calling .GetVersion
I0920 10:02:45.080900    2837 main.go:141] libmachine: Using API Version  1
I0920 10:02:45.080925    2837 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 10:02:45.081142    2837 main.go:141] libmachine: () Calling .GetMachineName
I0920 10:02:45.081251    2837 main.go:141] libmachine: (functional-477000) Calling .GetState
I0920 10:02:45.081327    2837 main.go:141] libmachine: (functional-477000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0920 10:02:45.081389    2837 main.go:141] libmachine: (functional-477000) DBG | hyperkit pid from json: 2008
I0920 10:02:45.082601    2837 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0920 10:02:45.082623    2837 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0920 10:02:45.089510    2837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50323
I0920 10:02:45.089851    2837 main.go:141] libmachine: () Calling .GetVersion
I0920 10:02:45.090197    2837 main.go:141] libmachine: Using API Version  1
I0920 10:02:45.090209    2837 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 10:02:45.090412    2837 main.go:141] libmachine: () Calling .GetMachineName
I0920 10:02:45.090504    2837 main.go:141] libmachine: (functional-477000) Calling .DriverName
I0920 10:02:45.090654    2837 ssh_runner.go:195] Run: systemctl --version
I0920 10:02:45.090673    2837 main.go:141] libmachine: (functional-477000) Calling .GetSSHHostname
I0920 10:02:45.090753    2837 main.go:141] libmachine: (functional-477000) Calling .GetSSHPort
I0920 10:02:45.090825    2837 main.go:141] libmachine: (functional-477000) Calling .GetSSHKeyPath
I0920 10:02:45.090900    2837 main.go:141] libmachine: (functional-477000) Calling .GetSSHUsername
I0920 10:02:45.090977    2837 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/functional-477000/id_rsa Username:docker}
I0920 10:02:45.131117    2837 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 10:02:45.150868    2837 main.go:141] libmachine: Making call to close driver server
I0920 10:02:45.150878    2837 main.go:141] libmachine: (functional-477000) Calling .Close
I0920 10:02:45.151028    2837 main.go:141] libmachine: Successfully made call to close driver server
I0920 10:02:45.151035    2837 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 10:02:45.151042    2837 main.go:141] libmachine: Making call to close driver server
I0920 10:02:45.151047    2837 main.go:141] libmachine: (functional-477000) Calling .Close
I0920 10:02:45.151231    2837 main.go:141] libmachine: Successfully made call to close driver server
I0920 10:02:45.151246    2837 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 10:02:45.151271    2837 main.go:141] libmachine: (functional-477000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.14s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-477000 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-477000"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"f5a6b296b
8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests"
:[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"952b2c0665b446964e9b6745b89f2e467e0f51e5b417baf3e745dbb7bf440b33","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-477000"],"size":"30"},{"id":"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:late
st"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-477000 image ls --format json --alsologtostderr:
I0920 10:02:44.759701    2829 out.go:296] Setting OutFile to fd 1 ...
I0920 10:02:44.759956    2829 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:44.759961    2829 out.go:309] Setting ErrFile to fd 2...
I0920 10:02:44.759966    2829 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:44.760142    2829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
I0920 10:02:44.760805    2829 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0920 10:02:44.760905    2829 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0920 10:02:44.761265    2829 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0920 10:02:44.761320    2829 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0920 10:02:44.768235    2829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50311
I0920 10:02:44.768628    2829 main.go:141] libmachine: () Calling .GetVersion
I0920 10:02:44.769055    2829 main.go:141] libmachine: Using API Version  1
I0920 10:02:44.769085    2829 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 10:02:44.769331    2829 main.go:141] libmachine: () Calling .GetMachineName
I0920 10:02:44.769458    2829 main.go:141] libmachine: (functional-477000) Calling .GetState
I0920 10:02:44.769551    2829 main.go:141] libmachine: (functional-477000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0920 10:02:44.769604    2829 main.go:141] libmachine: (functional-477000) DBG | hyperkit pid from json: 2008
I0920 10:02:44.770805    2829 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0920 10:02:44.770830    2829 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0920 10:02:44.777524    2829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50313
I0920 10:02:44.777870    2829 main.go:141] libmachine: () Calling .GetVersion
I0920 10:02:44.778246    2829 main.go:141] libmachine: Using API Version  1
I0920 10:02:44.778265    2829 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 10:02:44.778455    2829 main.go:141] libmachine: () Calling .GetMachineName
I0920 10:02:44.778569    2829 main.go:141] libmachine: (functional-477000) Calling .DriverName
I0920 10:02:44.778726    2829 ssh_runner.go:195] Run: systemctl --version
I0920 10:02:44.778745    2829 main.go:141] libmachine: (functional-477000) Calling .GetSSHHostname
I0920 10:02:44.778837    2829 main.go:141] libmachine: (functional-477000) Calling .GetSSHPort
I0920 10:02:44.778921    2829 main.go:141] libmachine: (functional-477000) Calling .GetSSHKeyPath
I0920 10:02:44.779002    2829 main.go:141] libmachine: (functional-477000) Calling .GetSSHUsername
I0920 10:02:44.779078    2829 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/functional-477000/id_rsa Username:docker}
I0920 10:02:44.819873    2829 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 10:02:44.844280    2829 main.go:141] libmachine: Making call to close driver server
I0920 10:02:44.844289    2829 main.go:141] libmachine: (functional-477000) Calling .Close
I0920 10:02:44.844495    2829 main.go:141] libmachine: (functional-477000) DBG | Closing plugin on server side
I0920 10:02:44.844500    2829 main.go:141] libmachine: Successfully made call to close driver server
I0920 10:02:44.844516    2829 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 10:02:44.844525    2829 main.go:141] libmachine: Making call to close driver server
I0920 10:02:44.844530    2829 main.go:141] libmachine: (functional-477000) Calling .Close
I0920 10:02:44.844672    2829 main.go:141] libmachine: (functional-477000) DBG | Closing plugin on server side
I0920 10:02:44.844686    2829 main.go:141] libmachine: Successfully made call to close driver server
I0920 10:02:44.844702    2829 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-477000 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: 433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-477000
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 952b2c0665b446964e9b6745b89f2e467e0f51e5b417baf3e745dbb7bf440b33
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-477000
size: "30"
- id: f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-477000 image ls --format yaml --alsologtostderr:
I0920 10:02:44.921659    2833 out.go:296] Setting OutFile to fd 1 ...
I0920 10:02:44.921942    2833 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:44.921947    2833 out.go:309] Setting ErrFile to fd 2...
I0920 10:02:44.921951    2833 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:44.922126    2833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
I0920 10:02:44.922734    2833 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0920 10:02:44.922828    2833 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0920 10:02:44.923177    2833 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0920 10:02:44.923230    2833 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0920 10:02:44.930000    2833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50316
I0920 10:02:44.930392    2833 main.go:141] libmachine: () Calling .GetVersion
I0920 10:02:44.930820    2833 main.go:141] libmachine: Using API Version  1
I0920 10:02:44.930832    2833 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 10:02:44.931105    2833 main.go:141] libmachine: () Calling .GetMachineName
I0920 10:02:44.931239    2833 main.go:141] libmachine: (functional-477000) Calling .GetState
I0920 10:02:44.931340    2833 main.go:141] libmachine: (functional-477000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0920 10:02:44.931422    2833 main.go:141] libmachine: (functional-477000) DBG | hyperkit pid from json: 2008
I0920 10:02:44.932647    2833 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0920 10:02:44.932672    2833 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0920 10:02:44.939529    2833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50318
I0920 10:02:44.939896    2833 main.go:141] libmachine: () Calling .GetVersion
I0920 10:02:44.940265    2833 main.go:141] libmachine: Using API Version  1
I0920 10:02:44.940282    2833 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 10:02:44.940482    2833 main.go:141] libmachine: () Calling .GetMachineName
I0920 10:02:44.940572    2833 main.go:141] libmachine: (functional-477000) Calling .DriverName
I0920 10:02:44.940718    2833 ssh_runner.go:195] Run: systemctl --version
I0920 10:02:44.940738    2833 main.go:141] libmachine: (functional-477000) Calling .GetSSHHostname
I0920 10:02:44.940823    2833 main.go:141] libmachine: (functional-477000) Calling .GetSSHPort
I0920 10:02:44.940914    2833 main.go:141] libmachine: (functional-477000) Calling .GetSSHKeyPath
I0920 10:02:44.940999    2833 main.go:141] libmachine: (functional-477000) Calling .GetSSHUsername
I0920 10:02:44.941084    2833 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/functional-477000/id_rsa Username:docker}
I0920 10:02:44.981428    2833 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 10:02:45.006947    2833 main.go:141] libmachine: Making call to close driver server
I0920 10:02:45.006956    2833 main.go:141] libmachine: (functional-477000) Calling .Close
I0920 10:02:45.007118    2833 main.go:141] libmachine: Successfully made call to close driver server
I0920 10:02:45.007129    2833 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 10:02:45.007136    2833 main.go:141] libmachine: Making call to close driver server
I0920 10:02:45.007141    2833 main.go:141] libmachine: (functional-477000) Calling .Close
I0920 10:02:45.007278    2833 main.go:141] libmachine: (functional-477000) DBG | Closing plugin on server side
I0920 10:02:45.007290    2833 main.go:141] libmachine: Successfully made call to close driver server
I0920 10:02:45.007298    2833 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-477000 ssh pgrep buildkitd: exit status 1 (111.216314ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image build -t localhost/my-image:functional-477000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 image build -t localhost/my-image:functional-477000 testdata/build --alsologtostderr: (2.217374154s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-477000 image build -t localhost/my-image:functional-477000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in c4d024fe6fef
Removing intermediate container c4d024fe6fef
---> ede18b349e9c
Step 3/3 : ADD content.txt /
---> 874d13e06117
Successfully built 874d13e06117
Successfully tagged localhost/my-image:functional-477000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-477000 image build -t localhost/my-image:functional-477000 testdata/build --alsologtostderr:
I0920 10:02:45.325230    2846 out.go:296] Setting OutFile to fd 1 ...
I0920 10:02:45.325583    2846 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:45.325589    2846 out.go:309] Setting ErrFile to fd 2...
I0920 10:02:45.325593    2846 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0920 10:02:45.325775    2846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
I0920 10:02:45.326401    2846 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0920 10:02:45.327044    2846 config.go:182] Loaded profile config "functional-477000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0920 10:02:45.327407    2846 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0920 10:02:45.327447    2846 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0920 10:02:45.334283    2846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50333
I0920 10:02:45.334674    2846 main.go:141] libmachine: () Calling .GetVersion
I0920 10:02:45.335100    2846 main.go:141] libmachine: Using API Version  1
I0920 10:02:45.335112    2846 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 10:02:45.335319    2846 main.go:141] libmachine: () Calling .GetMachineName
I0920 10:02:45.335431    2846 main.go:141] libmachine: (functional-477000) Calling .GetState
I0920 10:02:45.335516    2846 main.go:141] libmachine: (functional-477000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0920 10:02:45.335581    2846 main.go:141] libmachine: (functional-477000) DBG | hyperkit pid from json: 2008
I0920 10:02:45.336799    2846 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0920 10:02:45.336820    2846 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0920 10:02:45.343562    2846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50335
I0920 10:02:45.343909    2846 main.go:141] libmachine: () Calling .GetVersion
I0920 10:02:45.344264    2846 main.go:141] libmachine: Using API Version  1
I0920 10:02:45.344282    2846 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 10:02:45.344523    2846 main.go:141] libmachine: () Calling .GetMachineName
I0920 10:02:45.344644    2846 main.go:141] libmachine: (functional-477000) Calling .DriverName
I0920 10:02:45.344796    2846 ssh_runner.go:195] Run: systemctl --version
I0920 10:02:45.344815    2846 main.go:141] libmachine: (functional-477000) Calling .GetSSHHostname
I0920 10:02:45.344901    2846 main.go:141] libmachine: (functional-477000) Calling .GetSSHPort
I0920 10:02:45.344993    2846 main.go:141] libmachine: (functional-477000) Calling .GetSSHKeyPath
I0920 10:02:45.345076    2846 main.go:141] libmachine: (functional-477000) Calling .GetSSHUsername
I0920 10:02:45.345171    2846 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/functional-477000/id_rsa Username:docker}
I0920 10:02:45.387787    2846 build_images.go:151] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1794626604.tar
I0920 10:02:45.387873    2846 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 10:02:45.400522    2846 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1794626604.tar
I0920 10:02:45.404123    2846 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1794626604.tar: stat -c "%s %y" /var/lib/minikube/build/build.1794626604.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1794626604.tar': No such file or directory
I0920 10:02:45.404162    2846 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1794626604.tar --> /var/lib/minikube/build/build.1794626604.tar (3072 bytes)
I0920 10:02:45.433930    2846 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1794626604
I0920 10:02:45.441548    2846 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1794626604 -xf /var/lib/minikube/build/build.1794626604.tar
I0920 10:02:45.449199    2846 docker.go:340] Building image: /var/lib/minikube/build/build.1794626604
I0920 10:02:45.449269    2846 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-477000 /var/lib/minikube/build/build.1794626604
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0920 10:02:47.457536    2846 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-477000 /var/lib/minikube/build/build.1794626604: (2.008248051s)
I0920 10:02:47.457598    2846 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1794626604
I0920 10:02:47.465101    2846 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1794626604.tar
I0920 10:02:47.472058    2846 build_images.go:207] Built localhost/my-image:functional-477000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1794626604.tar
I0920 10:02:47.472080    2846 build_images.go:123] succeeded building to: functional-477000
I0920 10:02:47.472084    2846 build_images.go:124] failed building to: 
I0920 10:02:47.472125    2846 main.go:141] libmachine: Making call to close driver server
I0920 10:02:47.472132    2846 main.go:141] libmachine: (functional-477000) Calling .Close
I0920 10:02:47.472282    2846 main.go:141] libmachine: Successfully made call to close driver server
I0920 10:02:47.472290    2846 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 10:02:47.472297    2846 main.go:141] libmachine: Making call to close driver server
I0920 10:02:47.472305    2846 main.go:141] libmachine: (functional-477000) Calling .Close
I0920 10:02:47.472304    2846 main.go:141] libmachine: (functional-477000) DBG | Closing plugin on server side
I0920 10:02:47.472489    2846 main.go:141] libmachine: (functional-477000) DBG | Closing plugin on server side
I0920 10:02:47.472491    2846 main.go:141] libmachine: Successfully made call to close driver server
I0920 10:02:47.472500    2846 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.48s)
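
For reference, the trace above is the whole `image build` flow: minikube probes for buildkitd (absent here, so the legacy docker builder is used), packs the build context into a tar, copies it into the VM over ssh, and runs `docker build` there. A minimal sketch of the same steps, with `minikube` standing in for the out/minikube-darwin-amd64 binary under test and the profile and tag taken from this run:

    # buildkitd is not running in this VM (pgrep exits 1), so the legacy builder is used
    minikube -p functional-477000 ssh pgrep buildkitd
    # pack testdata/build, copy it into the VM, and docker-build it there
    minikube -p functional-477000 image build -t localhost/my-image:functional-477000 testdata/build
    # confirm the image landed in the cluster's image store
    minikube -p functional-477000 image ls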

TestFunctional/parallel/ImageCommands/Setup (2.5s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.441542267s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-477000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.50s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image load --daemon gcr.io/google-containers/addon-resizer:functional-477000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 image load --daemon gcr.io/google-containers/addon-resizer:functional-477000 --alsologtostderr: (3.358728006s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image load --daemon gcr.io/google-containers/addon-resizer:functional-477000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 image load --daemon gcr.io/google-containers/addon-resizer:functional-477000 --alsologtostderr: (1.8514847s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.00s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
2023/09/20 10:02:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.009855584s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-477000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image load --daemon gcr.io/google-containers/addon-resizer:functional-477000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 image load --daemon gcr.io/google-containers/addon-resizer:functional-477000 --alsologtostderr: (2.591092491s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.80s)

TestFunctional/parallel/DockerEnv/bash (0.72s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-477000 docker-env) && out/minikube-darwin-amd64 status -p functional-477000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-477000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.72s)
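
The two commands above are the standard docker-env round trip. A minimal sketch of what the test asserts, assuming a bash shell (`minikube` again stands in for the binary under test):

    # export DOCKER_HOST and TLS settings so the host docker CLI talks to the daemon in the VM
    eval "$(minikube -p functional-477000 docker-env)"
    # with the environment applied, this lists the VM's images, not the host's
    docker images
    # the test also re-checks that the cluster itself is still healthy
    minikube status -p functional-477000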

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image save gcr.io/google-containers/addon-resizer:functional-477000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 image save gcr.io/google-containers/addon-resizer:functional-477000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.070422961s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image rm gcr.io/google-containers/addon-resizer:functional-477000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.35s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.96613223s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.11s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-477000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-477000 image save --daemon gcr.io/google-containers/addon-resizer:functional-477000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-477000 image save --daemon gcr.io/google-containers/addon-resizer:functional-477000 --alsologtostderr: (1.258913696s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-477000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.36s)
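
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full save/remove/load round trip. A condensed sketch with the image from this run (the tar path is an arbitrary example):

    # export the image from the cluster to a tarball on the host
    minikube -p functional-477000 image save gcr.io/google-containers/addon-resizer:functional-477000 /tmp/addon-resizer-save.tar
    # remove it from the cluster, then restore it from the tarball
    minikube -p functional-477000 image rm gcr.io/google-containers/addon-resizer:functional-477000
    minikube -p functional-477000 image load /tmp/addon-resizer-save.tar
    # or push it straight into the host docker daemon and inspect it there
    minikube -p functional-477000 image save --daemon gcr.io/google-containers/addon-resizer:functional-477000
    docker image inspect gcr.io/google-containers/addon-resizer:functional-477000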

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-477000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-477000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-477000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (37.87s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-088000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-088000 --driver=hyperkit : (37.873082903s)
--- PASS: TestImageBuild/serial/Setup (37.87s)

TestImageBuild/serial/NormalBuild (1.21s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-088000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-088000: (1.213629899s)
--- PASS: TestImageBuild/serial/NormalBuild (1.21s)

TestImageBuild/serial/BuildWithBuildArg (0.66s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-088000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.66s)

TestImageBuild/serial/BuildWithDockerIgnore (0.21s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-088000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.21s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.19s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-088000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.19s)

TestIngressAddonLegacy/StartLegacyK8sCluster (66.01s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-351000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-351000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m6.011928705s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (66.01s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (19.31s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-351000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-351000 addons enable ingress --alsologtostderr -v=5: (19.307130939s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (19.31s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-351000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (36.54s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-351000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-351000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.673689097s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-351000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-351000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6bd74982-371b-4b08-8a83-e0e158d9ad1d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6bd74982-371b-4b08-8a83-e0e158d9ad1d] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.009784522s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-351000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-351000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-351000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.64.6
addons_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-351000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-351000 addons disable ingress-dns --alsologtostderr -v=1: (1.650980755s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-351000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-351000 addons disable ingress --alsologtostderr -v=1: (7.280935563s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.54s)
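
The validation above reduces to a handful of commands. A minimal sketch against the profile from this run (the nginx pod and ingress come from the suite's testdata manifests; `minikube` stands in for the binary under test):

    # enable both addons and wait for the controller pod to become ready
    minikube -p ingress-addon-legacy-351000 addons enable ingress
    minikube -p ingress-addon-legacy-351000 addons enable ingress-dns
    kubectl --context ingress-addon-legacy-351000 wait --for=condition=ready \
      --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
    # route a request through the ingress by Host header, then resolve a test
    # name against the VM's IP via ingress-dns
    minikube -p ingress-addon-legacy-351000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(minikube -p ingress-addon-legacy-351000 ip)"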

TestJSONOutput/start/Command (65.3s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-460000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0920 10:06:45.963091    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:45.970125    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:45.980340    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:46.000595    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:46.041304    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:46.121492    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:46.282481    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:46.603612    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:47.245301    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:48.526000    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:51.088017    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:06:56.208851    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-460000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m5.294872597s)
--- PASS: TestJSONOutput/start/Command (65.30s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Command (0.44s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-460000 --output=json --user=testUser
E0920 10:07:06.449995    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.44s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.41s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-460000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.41s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.14s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-460000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-460000 --output=json --user=testUser: (8.140117784s)
--- PASS: TestJSONOutput/stop/Command (8.14s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.7s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-103000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-103000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (340.919794ms)

-- stdout --
	{"specversion":"1.0","id":"ea42d8e2-4307-43e1-a611-02ec9b7e3553","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-103000] minikube v1.31.2 on Darwin 13.5.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"14f5ed0f-0f24-4b8c-881d-d646b7dace2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15927"}}
	{"specversion":"1.0","id":"b1430d22-f618-4f60-8fff-787980846d0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig"}}
	{"specversion":"1.0","id":"711c9acb-a407-4e3b-bed3-b6fc7b68457e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"0ead46a1-292c-4e81-ab26-94a04b85aa38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c4070202-b87f-4583-a66d-443ed9048158","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube"}}
	{"specversion":"1.0","id":"e81a1a57-48d3-4782-92d6-f056e5410b97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a650a5a8-e6f1-4d15-ad9d-ae4612fbf85c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-103000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-103000
--- PASS: TestErrorJSONOutput (0.70s)
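
Every line that --output=json emits is a CloudEvents envelope with a `type` and a `data` payload, which makes the stream easy to post-process. A hedged sketch using jq (the filter is illustrative, not part of the test; `minikube` stands in for the binary under test):

    # print only error events; for the run above this yields
    # "DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/amd64"
    minikube start -p json-output-error-103000 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'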

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (16.31s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-012000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-012000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.304570927s)
--- PASS: TestMountStart/serial/StartWithMountFirst (16.31s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-012000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-012000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
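
The start/verify pair above is the entire 9p mount workflow. A minimal sketch with the flags from this run (port, uid/gid, and msize are arbitrary test values; `minikube` stands in for the binary under test):

    # start a Kubernetes-free VM with the host mount attached over 9p
    minikube start -p mount-start-1-012000 --memory=2048 --no-kubernetes --driver=hyperkit \
      --mount --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464
    # the shared directory appears at /minikube-host, and the mount shows up as 9p
    minikube -p mount-start-1-012000 ssh -- ls /minikube-host
    minikube -p mount-start-1-012000 ssh -- mount | grep 9p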

TestMountStart/serial/StartWithMountSecond (16.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-024000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0920 10:08:07.894477    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-024000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.155464685s)
--- PASS: TestMountStart/serial/StartWithMountSecond (16.16s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-024000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-024000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (2.31s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-012000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-012000 --alsologtostderr -v=5: (2.311798153s)
--- PASS: TestMountStart/serial/DeleteFirst (2.31s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-024000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-024000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (2.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-024000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-024000: (2.230837961s)
--- PASS: TestMountStart/serial/Stop (2.23s)

TestMountStart/serial/RestartStopped (16.98s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-024000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-024000: (15.978569791s)
--- PASS: TestMountStart/serial/RestartStopped (16.98s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-024000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-024000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (96.18s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-139000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0920 10:09:29.815468    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-139000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m35.962303767s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.18s)
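
The two-node bring-up above reduces to one start invocation plus a status check; a minimal sketch (profile name illustrative):

    $ minikube start -p multinode-demo --nodes=2 --memory=2200 --driver=hyperkit
    $ minikube -p multinode-demo status

Additional machines are named <profile>-m02, <profile>-m03, and so on, which is the convention the CopyFile and StopNode steps below rely on.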

TestMultiNode/serial/DeployApp2Nodes (4.55s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-139000 -- rollout status deployment/busybox: (3.125684678s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- exec busybox-5bc68d56bd-8qjmw -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- exec busybox-5bc68d56bd-vznpl -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- exec busybox-5bc68d56bd-8qjmw -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- exec busybox-5bc68d56bd-vznpl -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- exec busybox-5bc68d56bd-8qjmw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- exec busybox-5bc68d56bd-vznpl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.55s)
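
The deployment check applies a busybox manifest (two replicas in this run, one pod per node), waits for the rollout, then resolves public and in-cluster names from every pod; a condensed sketch, assuming the testdata manifest from the minikube repo:

    $ kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    $ kubectl rollout status deployment/busybox
    $ kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local

Running the lookups in each pod exercises DNS from both nodes rather than just the control plane.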

TestMultiNode/serial/PingHostFrom2Pods (0.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- exec busybox-5bc68d56bd-8qjmw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- exec busybox-5bc68d56bd-8qjmw -- sh -c "ping -c 1 192.168.64.1"
E0920 10:10:19.008493    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:10:19.013780    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:10:19.025637    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:10:19.046517    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:10:19.087367    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- exec busybox-5bc68d56bd-vznpl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0920 10:10:19.168619    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-139000 -- exec busybox-5bc68d56bd-vznpl -- sh -c "ping -c 1 192.168.64.1"
E0920 10:10:19.329345    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
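
The pipeline in these exec calls extracts the host's address as seen from inside a pod: nslookup prints the resolved record for host.minikube.internal on its fifth output line, awk 'NR==5' selects that line, and cut -d' ' -f3 takes the third space-separated field, the IP itself (192.168.64.1 on this hyperkit network). Reusable form (pod name illustrative):

    $ kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"

The follow-up ping -c 1 against that address proves pods on both nodes can reach the host.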

TestMultiNode/serial/AddNode (32.71s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-139000 -v 3 --alsologtostderr
E0920 10:10:19.650055    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:10:20.291387    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:10:21.572419    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:10:24.133041    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:10:29.253531    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:10:39.493740    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-139000 -v 3 --alsologtostderr: (32.4260731s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (32.71s)
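
Growing the running cluster is likewise a single command; sketch (profile name illustrative):

    $ minikube node add -p multinode-demo
    $ minikube -p multinode-demo status

The new machine takes the next free -m0N suffix, m03 in this run.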

TestMultiNode/serial/ProfileList (0.18s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

TestMultiNode/serial/CopyFile (4.66s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp testdata/cp-test.txt multinode-139000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp multinode-139000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3723444776/001/cp-test_multinode-139000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp multinode-139000:/home/docker/cp-test.txt multinode-139000-m02:/home/docker/cp-test_multinode-139000_multinode-139000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m02 "sudo cat /home/docker/cp-test_multinode-139000_multinode-139000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp multinode-139000:/home/docker/cp-test.txt multinode-139000-m03:/home/docker/cp-test_multinode-139000_multinode-139000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m03 "sudo cat /home/docker/cp-test_multinode-139000_multinode-139000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp testdata/cp-test.txt multinode-139000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp multinode-139000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3723444776/001/cp-test_multinode-139000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp multinode-139000-m02:/home/docker/cp-test.txt multinode-139000:/home/docker/cp-test_multinode-139000-m02_multinode-139000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000 "sudo cat /home/docker/cp-test_multinode-139000-m02_multinode-139000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp multinode-139000-m02:/home/docker/cp-test.txt multinode-139000-m03:/home/docker/cp-test_multinode-139000-m02_multinode-139000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m03 "sudo cat /home/docker/cp-test_multinode-139000-m02_multinode-139000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp testdata/cp-test.txt multinode-139000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp multinode-139000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3723444776/001/cp-test_multinode-139000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp multinode-139000-m03:/home/docker/cp-test.txt multinode-139000:/home/docker/cp-test_multinode-139000-m03_multinode-139000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000 "sudo cat /home/docker/cp-test_multinode-139000-m03_multinode-139000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 cp multinode-139000-m03:/home/docker/cp-test.txt multinode-139000-m02:/home/docker/cp-test_multinode-139000-m03_multinode-139000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 ssh -n multinode-139000-m02 "sudo cat /home/docker/cp-test_multinode-139000-m03_multinode-139000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (4.66s)
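
The copy matrix above exercises the three forms minikube cp accepts, each verified with an ssh cat on the receiving side; condensed sketch (profile and paths illustrative):

    $ minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt    # host -> node
    $ minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt        # node -> host
    $ minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt    # node -> node
    $ minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"

The -n flag pins the ssh verification to a specific node of the profile.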

TestMultiNode/serial/StopNode (2.62s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-139000 node stop m03: (2.169910523s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-139000 status: exit status 7 (225.818713ms)

-- stdout --
	multinode-139000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-139000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-139000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-139000 status --alsologtostderr: exit status 7 (223.116733ms)

-- stdout --
	multinode-139000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-139000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-139000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:10:59.469235    3687 out.go:296] Setting OutFile to fd 1 ...
	I0920 10:10:59.469499    3687 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 10:10:59.469505    3687 out.go:309] Setting ErrFile to fd 2...
	I0920 10:10:59.469509    3687 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 10:10:59.469694    3687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
	I0920 10:10:59.469902    3687 out.go:303] Setting JSON to false
	I0920 10:10:59.469924    3687 mustload.go:65] Loading cluster: multinode-139000
	I0920 10:10:59.469961    3687 notify.go:220] Checking for updates...
	I0920 10:10:59.470236    3687 config.go:182] Loaded profile config "multinode-139000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0920 10:10:59.470247    3687 status.go:255] checking status of multinode-139000 ...
	I0920 10:10:59.470621    3687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:10:59.470679    3687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:10:59.477513    3687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51230
	I0920 10:10:59.477855    3687 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:10:59.478262    3687 main.go:141] libmachine: Using API Version  1
	I0920 10:10:59.478276    3687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:10:59.478515    3687 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:10:59.478617    3687 main.go:141] libmachine: (multinode-139000) Calling .GetState
	I0920 10:10:59.478707    3687 main.go:141] libmachine: (multinode-139000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:10:59.478758    3687 main.go:141] libmachine: (multinode-139000) DBG | hyperkit pid from json: 3394
	I0920 10:10:59.479891    3687 status.go:330] multinode-139000 host status = "Running" (err=<nil>)
	I0920 10:10:59.479911    3687 host.go:66] Checking if "multinode-139000" exists ...
	I0920 10:10:59.480155    3687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:10:59.480173    3687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:10:59.486973    3687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51232
	I0920 10:10:59.487311    3687 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:10:59.487672    3687 main.go:141] libmachine: Using API Version  1
	I0920 10:10:59.487694    3687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:10:59.487913    3687 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:10:59.488023    3687 main.go:141] libmachine: (multinode-139000) Calling .GetIP
	I0920 10:10:59.488104    3687 host.go:66] Checking if "multinode-139000" exists ...
	I0920 10:10:59.488345    3687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:10:59.488371    3687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:10:59.498461    3687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51234
	I0920 10:10:59.498802    3687 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:10:59.499144    3687 main.go:141] libmachine: Using API Version  1
	I0920 10:10:59.499160    3687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:10:59.499355    3687 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:10:59.499454    3687 main.go:141] libmachine: (multinode-139000) Calling .DriverName
	I0920 10:10:59.499578    3687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 10:10:59.499600    3687 main.go:141] libmachine: (multinode-139000) Calling .GetSSHHostname
	I0920 10:10:59.499676    3687 main.go:141] libmachine: (multinode-139000) Calling .GetSSHPort
	I0920 10:10:59.499751    3687 main.go:141] libmachine: (multinode-139000) Calling .GetSSHKeyPath
	I0920 10:10:59.499831    3687 main.go:141] libmachine: (multinode-139000) Calling .GetSSHUsername
	I0920 10:10:59.499922    3687 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/multinode-139000/id_rsa Username:docker}
	I0920 10:10:59.536986    3687 ssh_runner.go:195] Run: systemctl --version
	I0920 10:10:59.540465    3687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:10:59.549190    3687 kubeconfig.go:92] found "multinode-139000" server: "https://192.168.64.12:8443"
	I0920 10:10:59.549212    3687 api_server.go:166] Checking apiserver status ...
	I0920 10:10:59.549249    3687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:10:59.557197    3687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1943/cgroup
	I0920 10:10:59.563215    3687 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/podb585772c6f2982e4f5d4dddd33a541e6/d9821938fde04257717534c0ba4e59a77e13da281339ca1daff9c4a478141121"
	I0920 10:10:59.563267    3687 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb585772c6f2982e4f5d4dddd33a541e6/d9821938fde04257717534c0ba4e59a77e13da281339ca1daff9c4a478141121/freezer.state
	I0920 10:10:59.568910    3687 api_server.go:204] freezer state: "THAWED"
	I0920 10:10:59.568925    3687 api_server.go:253] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0920 10:10:59.572161    3687 api_server.go:279] https://192.168.64.12:8443/healthz returned 200:
	ok
	I0920 10:10:59.572173    3687 status.go:421] multinode-139000 apiserver status = Running (err=<nil>)
	I0920 10:10:59.572179    3687 status.go:257] multinode-139000 status: &{Name:multinode-139000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 10:10:59.572194    3687 status.go:255] checking status of multinode-139000-m02 ...
	I0920 10:10:59.572450    3687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:10:59.572471    3687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:10:59.579559    3687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51238
	I0920 10:10:59.579939    3687 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:10:59.580283    3687 main.go:141] libmachine: Using API Version  1
	I0920 10:10:59.580296    3687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:10:59.580498    3687 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:10:59.580592    3687 main.go:141] libmachine: (multinode-139000-m02) Calling .GetState
	I0920 10:10:59.580674    3687 main.go:141] libmachine: (multinode-139000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:10:59.580738    3687 main.go:141] libmachine: (multinode-139000-m02) DBG | hyperkit pid from json: 3413
	I0920 10:10:59.581909    3687 status.go:330] multinode-139000-m02 host status = "Running" (err=<nil>)
	I0920 10:10:59.581918    3687 host.go:66] Checking if "multinode-139000-m02" exists ...
	I0920 10:10:59.582170    3687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:10:59.582193    3687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:10:59.589041    3687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51240
	I0920 10:10:59.589370    3687 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:10:59.589693    3687 main.go:141] libmachine: Using API Version  1
	I0920 10:10:59.589703    3687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:10:59.589907    3687 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:10:59.590009    3687 main.go:141] libmachine: (multinode-139000-m02) Calling .GetIP
	I0920 10:10:59.590096    3687 host.go:66] Checking if "multinode-139000-m02" exists ...
	I0920 10:10:59.590360    3687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:10:59.590385    3687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:10:59.597097    3687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51242
	I0920 10:10:59.597438    3687 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:10:59.597797    3687 main.go:141] libmachine: Using API Version  1
	I0920 10:10:59.597813    3687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:10:59.598013    3687 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:10:59.598120    3687 main.go:141] libmachine: (multinode-139000-m02) Calling .DriverName
	I0920 10:10:59.598245    3687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 10:10:59.598257    3687 main.go:141] libmachine: (multinode-139000-m02) Calling .GetSSHHostname
	I0920 10:10:59.598327    3687 main.go:141] libmachine: (multinode-139000-m02) Calling .GetSSHPort
	I0920 10:10:59.598409    3687 main.go:141] libmachine: (multinode-139000-m02) Calling .GetSSHKeyPath
	I0920 10:10:59.598485    3687 main.go:141] libmachine: (multinode-139000-m02) Calling .GetSSHUsername
	I0920 10:10:59.598554    3687 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15927-1321/.minikube/machines/multinode-139000-m02/id_rsa Username:docker}
	I0920 10:10:59.634224    3687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:10:59.643138    3687 status.go:257] multinode-139000-m02 status: &{Name:multinode-139000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 10:10:59.643154    3687 status.go:255] checking status of multinode-139000-m03 ...
	I0920 10:10:59.643431    3687 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:10:59.643454    3687 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:10:59.650383    3687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51245
	I0920 10:10:59.650737    3687 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:10:59.651077    3687 main.go:141] libmachine: Using API Version  1
	I0920 10:10:59.651094    3687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:10:59.651334    3687 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:10:59.651448    3687 main.go:141] libmachine: (multinode-139000-m03) Calling .GetState
	I0920 10:10:59.651525    3687 main.go:141] libmachine: (multinode-139000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:10:59.651586    3687 main.go:141] libmachine: (multinode-139000-m03) DBG | hyperkit pid from json: 3480
	I0920 10:10:59.652702    3687 main.go:141] libmachine: (multinode-139000-m03) DBG | hyperkit pid 3480 missing from process table
	I0920 10:10:59.652720    3687 status.go:330] multinode-139000-m03 host status = "Stopped" (err=<nil>)
	I0920 10:10:59.652727    3687 status.go:343] host is not running, skipping remaining checks
	I0920 10:10:59.652732    3687 status.go:257] multinode-139000-m03 status: &{Name:multinode-139000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.62s)
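
Note the exit codes: with only m03 stopped, both status invocations return exit status 7 rather than 0, so the test asserts a non-zero exit instead of a failure; the later "status error: exit status 7 (may be ok)" lines in this report reflect the same convention. Sketch (profile name illustrative):

    $ minikube -p multinode-demo node stop m03
    $ minikube -p multinode-demo status; echo "exit=$?"    # non-zero (7 in this run) once any host is stopped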

TestMultiNode/serial/StartAfterStop (27.17s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 node start m03 --alsologtostderr
E0920 10:10:59.973940    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-139000 node start m03 --alsologtostderr: (26.815954847s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.17s)

TestMultiNode/serial/RestartKeepsNodes (124.1s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-139000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-139000
E0920 10:11:40.935413    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-139000: (18.410058965s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-139000 --wait=true -v=8 --alsologtostderr
E0920 10:11:45.968007    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:12:13.656004    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:13:02.857123    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-139000 --wait=true -v=8 --alsologtostderr: (1m45.604917726s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-139000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (124.10s)
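
Comparing node list output before and after the full stop/start cycle is what proves the three-node topology survives a restart; sketch (profile name illustrative):

    $ minikube node list -p multinode-demo
    $ minikube stop -p multinode-demo
    $ minikube start -p multinode-demo --wait=true
    $ minikube node list -p multinode-demo    # expect the same node set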

TestMultiNode/serial/DeleteNode (2.91s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-139000 node delete m03: (2.601075893s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.91s)
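
The go-template in the final check prints one Ready condition status per node, so two True lines after deleting m03 confirm both the removal and the health of the remaining pair; the template is reusable as-is:

    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'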

TestMultiNode/serial/StopMultiNode (16.44s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 stop
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-139000 stop: (16.324541615s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-139000 status: exit status 7 (58.933261ms)

-- stdout --
	multinode-139000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-139000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-139000 status --alsologtostderr: exit status 7 (58.795619ms)

-- stdout --
	multinode-139000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-139000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:13:50.261616    3801 out.go:296] Setting OutFile to fd 1 ...
	I0920 10:13:50.261889    3801 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 10:13:50.261895    3801 out.go:309] Setting ErrFile to fd 2...
	I0920 10:13:50.261899    3801 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0920 10:13:50.262075    3801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15927-1321/.minikube/bin
	I0920 10:13:50.262264    3801 out.go:303] Setting JSON to false
	I0920 10:13:50.262286    3801 mustload.go:65] Loading cluster: multinode-139000
	I0920 10:13:50.262323    3801 notify.go:220] Checking for updates...
	I0920 10:13:50.262617    3801 config.go:182] Loaded profile config "multinode-139000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0920 10:13:50.262629    3801 status.go:255] checking status of multinode-139000 ...
	I0920 10:13:50.263044    3801 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:13:50.263107    3801 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:13:50.269827    3801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51426
	I0920 10:13:50.270147    3801 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:13:50.270635    3801 main.go:141] libmachine: Using API Version  1
	I0920 10:13:50.270648    3801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:13:50.270878    3801 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:13:50.270978    3801 main.go:141] libmachine: (multinode-139000) Calling .GetState
	I0920 10:13:50.271062    3801 main.go:141] libmachine: (multinode-139000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:13:50.271131    3801 main.go:141] libmachine: (multinode-139000) DBG | hyperkit pid from json: 3741
	I0920 10:13:50.272024    3801 status.go:330] multinode-139000 host status = "Stopped" (err=<nil>)
	I0920 10:13:50.272036    3801 status.go:343] host is not running, skipping remaining checks
	I0920 10:13:50.272043    3801 main.go:141] libmachine: (multinode-139000) DBG | hyperkit pid 3741 missing from process table
	I0920 10:13:50.272042    3801 status.go:257] multinode-139000 status: &{Name:multinode-139000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 10:13:50.272062    3801 status.go:255] checking status of multinode-139000-m02 ...
	I0920 10:13:50.272292    3801 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0920 10:13:50.272313    3801 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0920 10:13:50.279118    3801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51428
	I0920 10:13:50.279419    3801 main.go:141] libmachine: () Calling .GetVersion
	I0920 10:13:50.279750    3801 main.go:141] libmachine: Using API Version  1
	I0920 10:13:50.279765    3801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 10:13:50.279963    3801 main.go:141] libmachine: () Calling .GetMachineName
	I0920 10:13:50.280061    3801 main.go:141] libmachine: (multinode-139000-m02) Calling .GetState
	I0920 10:13:50.280143    3801 main.go:141] libmachine: (multinode-139000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0920 10:13:50.280206    3801 main.go:141] libmachine: (multinode-139000-m02) DBG | hyperkit pid from json: 3758
	I0920 10:13:50.281047    3801 main.go:141] libmachine: (multinode-139000-m02) DBG | hyperkit pid 3758 missing from process table
	I0920 10:13:50.281073    3801 status.go:330] multinode-139000-m02 host status = "Stopped" (err=<nil>)
	I0920 10:13:50.281079    3801 status.go:343] host is not running, skipping remaining checks
	I0920 10:13:50.281084    3801 status.go:257] multinode-139000-m02 status: &{Name:multinode-139000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.44s)

TestMultiNode/serial/RestartMultiNode (118.16s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-139000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0920 10:15:19.009165    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:15:46.698281    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-139000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m57.841434927s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-139000 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (118.16s)

TestMultiNode/serial/ValidateNameConflict (44.42s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-139000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-139000-m02 --driver=hyperkit 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-139000-m02 --driver=hyperkit : exit status 14 (411.610873ms)

-- stdout --
	* [multinode-139000-m02] minikube v1.31.2 on Darwin 13.5.2
	  - MINIKUBE_LOCATION=15927
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-139000-m02' is duplicated with machine name 'multinode-139000-m02' in profile 'multinode-139000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-139000-m03 --driver=hyperkit 
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-139000-m03 --driver=hyperkit : (38.46914655s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-139000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-139000: exit status 80 (253.086468ms)

-- stdout --
	* Adding node m03 to cluster multinode-139000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-139000-m03 already exists in multinode-139000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-139000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-139000-m03: (5.247369936s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.42s)
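
Both rejections follow from the machine-naming convention noted earlier: profile multinode-139000 already owns a machine named multinode-139000-m02, so a new profile of that name is refused (MK_USAGE), and once a standalone multinode-139000-m03 profile exists, node add fails (GUEST_NODE_ADD) because the next machine name it would assign is taken. Deleting the conflicting profile, as the test does, frees the name again:

    $ minikube delete -p multinode-139000-m03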

TestPreload (165.83s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-824000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0920 10:16:45.967055    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-824000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m9.720974001s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-824000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-824000 image pull gcr.io/k8s-minikube/busybox: (1.170114516s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-824000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-824000: (8.226215333s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-824000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-824000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m21.334185025s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-824000 image list
helpers_test.go:175: Cleaning up "test-preload-824000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-824000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-824000: (5.244887614s)
--- PASS: TestPreload (165.83s)
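
Starting with --preload=false makes the node pull images instead of unpacking the preloaded tarball; the image list after the stop/start cycle then checks that the manually pulled busybox survives a restart. Sketch (profile name illustrative):

    $ minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 --driver=hyperkit
    $ minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    $ minikube stop -p preload-demo && minikube start -p preload-demo
    $ minikube -p preload-demo image list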

TestScheduledStopUnix (106.19s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-292000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-292000 --memory=2048 --driver=hyperkit : (34.834292562s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-292000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-292000 -n scheduled-stop-292000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-292000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-292000 --cancel-scheduled
E0920 10:20:19.010494    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-292000 -n scheduled-stop-292000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-292000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-292000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-292000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-292000: exit status 7 (53.668476ms)

-- stdout --
	scheduled-stop-292000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-292000 -n scheduled-stop-292000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-292000 -n scheduled-stop-292000: exit status 7 (49.335035ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-292000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-292000
--- PASS: TestScheduledStopUnix (106.19s)
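
The scheduled-stop flow is driven by three pieces: --schedule arms a delayed stop, --cancel-scheduled disarms it, and the TimeToStop status field exposes the pending timer. Sketch (profile name illustrative):

    $ minikube stop -p demo --schedule 5m
    $ minikube status -p demo --format={{.TimeToStop}}
    $ minikube stop -p demo --cancel-scheduled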

TestSkaffold (108.29s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1631431672 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-273000 --memory=2600 --driver=hyperkit 
E0920 10:21:45.967422    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-273000 --memory=2600 --driver=hyperkit : (34.603418043s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1631431672 run --minikube-profile skaffold-273000 --kube-context skaffold-273000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1631431672 run --minikube-profile skaffold-273000 --kube-context skaffold-273000 --status-check=true --port-forward=false --interactive=false: (56.332606915s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-d77c9f9db-2t9m8" [c785b5a9-beef-4a12-9082-9580d8aa581b] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.010698673s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7f9f5c9c8f-jjgxl" [14d012af-95c6-402b-8f8d-9fd0279c9dc2] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008523282s
helpers_test.go:175: Cleaning up "skaffold-273000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-273000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-273000: (5.253128884s)
--- PASS: TestSkaffold (108.29s)
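
skaffold is pointed at the cluster purely through flags, with no kubeconfig switching; the run then blocks until the deployed services pass their status checks. Sketch (profile name illustrative):

    $ skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo --status-check=true --port-forward=false --interactive=false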

TestRunningBinaryUpgrade (160.96s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1100503172.exe start -p running-upgrade-694000 --memory=2200 --vm-driver=hyperkit 
E0920 10:26:42.055798    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:26:45.964357    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1100503172.exe start -p running-upgrade-694000 --memory=2200 --vm-driver=hyperkit : (1m28.413279639s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-694000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0920 10:27:48.286338    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:48.292818    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:48.303148    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:48.323732    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:48.364993    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:48.445105    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:48.606609    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:48.928208    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:49.570520    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:50.850925    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:53.411342    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:27:58.532823    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-694000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m6.293923861s)
helpers_test.go:175: Cleaning up "running-upgrade-694000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-694000
E0920 10:28:08.773544    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-694000: (5.247197766s)
--- PASS: TestRunningBinaryUpgrade (160.96s)
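
The upgrade-in-place pattern here is: an older released binary creates the cluster (note the legacy --vm-driver flag), then the freshly built binary restarts the same profile. Sketch with hypothetical binary paths:

    $ ./minikube-v1.6.2 start -p upgrade-demo --memory=2200 --vm-driver=hyperkit
    $ ./minikube-current start -p upgrade-demo --memory=2200 --driver=hyperkit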

TestKubernetesUpgrade (141.34s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-719000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-719000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m16.611242452s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-719000
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-719000: (2.165100853s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-719000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-719000 status --format={{.Host}}: exit status 7 (50.477286ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-719000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-719000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit : (32.240118121s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-719000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-719000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-719000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (578.489779ms)

-- stdout --
	* [kubernetes-upgrade-719000] minikube v1.31.2 on Darwin 13.5.2
	  - MINIKUBE_LOCATION=15927
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-719000
	    minikube start -p kubernetes-upgrade-719000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7190002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-719000 --kubernetes-version=v1.28.2
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-719000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-719000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit : (26.136712839s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-719000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-719000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-719000: (3.514868861s)
--- PASS: TestKubernetesUpgrade (141.34s)
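The upgrade path this test exercises can be replayed by hand with the same flags as the logged invocations; the profile name below is illustrative, not taken from this run:

$ minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit
$ minikube stop -p upgrade-demo
$ minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.28.2 --driver=hyperkit
$ kubectl --context upgrade-demo version --output=json

Restarting the same profile at v1.16.0 is then refused with K8S_DOWNGRADE_UNSUPPORTED, exactly as asserted above.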

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.31s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
E0920 10:23:09.017767    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=15927
- KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current893291013/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current893291013/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current893291013/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current893291013/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.31s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.57s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=15927
- KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2611181725/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2611181725/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2611181725/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2611181725/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.57s)
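Both skip-upgrade subtests surface the same requirement: the docker-machine-driver-hyperkit binary must be root-owned and setuid before minikube will use it, which is what the "elevated permissions" warning refers to. The two commands minikube attempts are, generically (the path is illustrative; in the tests it points into a throwaway MINIKUBE_HOME under the temp directory):

$ sudo chown root:wheel $MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s $MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit

Because the CI agent has no cached sudo credentials and runs with --interactive=false, the update is skipped with a warning and the test still passes.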

TestStoppedBinaryUpgrade/Setup (0.88s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.88s)

TestStoppedBinaryUpgrade/Upgrade (152.74s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.646099983.exe start -p stopped-upgrade-369000 --memory=2200 --vm-driver=hyperkit 
E0920 10:29:10.216052    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.646099983.exe start -p stopped-upgrade-369000 --memory=2200 --vm-driver=hyperkit : (1m25.201100429s)
version_upgrade_test.go:205: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.646099983.exe -p stopped-upgrade-369000 stop
E0920 10:30:19.008359    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.646099983.exe -p stopped-upgrade-369000 stop: (8.075903856s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-369000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:211: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-369000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (59.461742424s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (152.74s)
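Unlike TestKubernetesUpgrade, which upgrades Kubernetes in place, this flow upgrades the minikube binary itself while the cluster is stopped. A condensed replay (the profile name is illustrative, and ./minikube-v1.6.2 stands in for the downloaded legacy binary at the temp path in the log; note the old binary takes --vm-driver where the current one takes --driver):

$ ./minikube-v1.6.2 start -p stopped-demo --memory=2200 --vm-driver=hyperkit
$ ./minikube-v1.6.2 stop -p stopped-demo
$ out/minikube-darwin-amd64 start -p stopped-demo --memory=2200 --driver=hyperkit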

TestPause/serial/Start (60.41s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-524000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E0920 10:30:32.136898    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-524000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (1m0.41424495s)
--- PASS: TestPause/serial/Start (60.41s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.51s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-369000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-369000: (2.512902976s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.51s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.54s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-357000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-357000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (542.666882ms)
-- stdout --
	* [NoKubernetes-357000] minikube v1.31.2 on Darwin 13.5.2
	  - MINIKUBE_LOCATION=15927
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15927-1321/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15927-1321/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.54s)
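The MK_USAGE exit is the expected outcome here: --no-kubernetes and --kubernetes-version are mutually exclusive. If the version is pinned via global config rather than a flag, minikube's own suggestion from the stderr above applies:

$ minikube config unset kubernetes-version
$ minikube start -p NoKubernetes-357000 --no-kubernetes --driver=hyperkit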

TestNoKubernetes/serial/StartWithK8s (35.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-357000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-357000 --driver=hyperkit : (35.743095022s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-357000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.89s)

TestPause/serial/SecondStartNoReconfiguration (50.78s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-524000 --alsologtostderr -v=1 --driver=hyperkit 
E0920 10:31:45.965021    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-524000 --alsologtostderr -v=1 --driver=hyperkit : (50.764836133s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (50.78s)

TestNoKubernetes/serial/StartWithStopK8s (7.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-357000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-357000 --no-kubernetes --driver=hyperkit : (4.753235009s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-357000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-357000 status -o json: exit status 2 (127.97395ms)
-- stdout --
	{"Name":"NoKubernetes-357000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-357000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-357000: (2.416988671s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.30s)

TestNoKubernetes/serial/Start (18.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-357000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-357000 --no-kubernetes --driver=hyperkit : (18.10696059s)
--- PASS: TestNoKubernetes/serial/Start (18.11s)

TestPause/serial/Pause (0.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-524000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

TestPause/serial/VerifyStatus (0.14s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-524000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-524000 --output=json --layout=cluster: exit status 2 (139.313699ms)
-- stdout --
	{"Name":"pause-524000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-524000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)
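The JSON above shows how minikube reuses HTTP-style status codes in this layout: 200/OK for a healthy component, 405/Stopped for the stopped kubelet, and 418/Paused for the paused apiserver. The non-zero exit (2) appears to be expected as well, since status signals a non-running cluster state through its exit code; the query itself:

$ minikube status -p pause-524000 --output=json --layout=cluster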

TestPause/serial/Unpause (0.47s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-524000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.47s)

TestPause/serial/PauseAgain (0.57s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-524000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.57s)

TestPause/serial/DeletePaused (5.25s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-524000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-524000 --alsologtostderr -v=5: (5.247037101s)
--- PASS: TestPause/serial/DeletePaused (5.25s)

TestPause/serial/VerifyDeletedResources (0.16s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)
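Taken together, the serial pause group walks the full lifecycle. A condensed replay of the commands above (out/minikube-darwin-amd64 abbreviated to minikube):

$ minikube pause -p pause-524000 --alsologtostderr -v=5
$ minikube status -p pause-524000 --output=json --layout=cluster
$ minikube unpause -p pause-524000 --alsologtostderr -v=5
$ minikube pause -p pause-524000 --alsologtostderr -v=5
$ minikube delete -p pause-524000 --alsologtostderr -v=5
$ minikube profile list --output json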

TestNetworkPlugins/group/auto/Start (52.12s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (52.118527848s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.12s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-357000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-357000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (118.706873ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)
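The assertion leans on systemctl semantics: with no kubelet unit active, is-active exits non-zero (3 conventionally means inactive), which ssh surfaces as "Process exited with status 3" and the test reads as confirmation that Kubernetes is not running. The probe, verbatim from the log:

$ minikube ssh -p NoKubernetes-357000 "sudo systemctl is-active --quiet service kubelet"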

TestNoKubernetes/serial/ProfileList (0.37s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.37s)

TestNoKubernetes/serial/Stop (2.19s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-357000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-357000: (2.189301228s)
--- PASS: TestNoKubernetes/serial/Stop (2.19s)

TestNoKubernetes/serial/StartNoArgs (21.46s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-357000 --driver=hyperkit 
E0920 10:32:48.287105    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-357000 --driver=hyperkit : (21.455299913s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.46s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.12s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-357000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-357000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (116.564527ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.12s)

TestNetworkPlugins/group/kindnet/Start (59.01s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
E0920 10:33:15.979286    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (59.01449453s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.13s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-704000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.13s)

TestNetworkPlugins/group/auto/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-704000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f9pjm" [ee7b47c6-4f43-49b6-89b7-df5fb125b223] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f9pjm" [ee7b47c6-4f43-49b6-89b7-df5fb125b223] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.008891326s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.19s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-704000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
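Every network-plugin group repeats these same four data-plane checks after Start; condensed here against the auto profile from this run:

$ kubectl --context auto-704000 replace --force -f testdata/netcat-deployment.yaml
$ kubectl --context auto-704000 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The last probe is the hairpin check: the netcat pod dials its own service name, which only succeeds when the network plugin routes a pod's traffic back to itself.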

TestNetworkPlugins/group/calico/Start (77.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m17.855817023s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.86s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9gssr" [f3a4acfb-5853-4ca9-a46a-9bdc3043a607] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014657304s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-704000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-704000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7pgdm" [f8611181-e0d1-4c46-8b41-7ec82ef98425] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7pgdm" [f8611181-e0d1-4c46-8b41-7ec82ef98425] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.007055845s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.19s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-704000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (59s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (59.003406939s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.00s)
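As this subtest shows, --cni takes either a built-in plugin name (the other groups pass kindnet, calico, flannel, bridge, or false) or a path to a custom CNI manifest; trimmed to the relevant flags:

$ minikube start -p custom-flannel-704000 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=hyperkit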

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4nn7s" [f906cd3b-9f9e-411e-a41a-8d8d0ed20c8e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.017410591s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-704000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

TestNetworkPlugins/group/calico/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-704000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p7wqv" [edeb81dd-f737-4a1a-ad82-9e7b4dadfdc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 10:35:19.008617    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-p7wqv" [edeb81dd-f737-4a1a-ad82-9e7b4dadfdc4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.008766602s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.22s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-704000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-704000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-704000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5ktdt" [adcdc5ff-66dd-493e-b477-9e8f4ae8a45b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5ktdt" [adcdc5ff-66dd-493e-b477-9e8f4ae8a45b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.009558534s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-704000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (51.77s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (51.769381765s)
--- PASS: TestNetworkPlugins/group/false/Start (51.77s)

TestNetworkPlugins/group/enable-default-cni/Start (49.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (49.521895653s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (49.52s)

TestNetworkPlugins/group/false/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-704000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.17s)

TestNetworkPlugins/group/false/NetCatPod (13.18s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-704000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4j68z" [adac34d7-1884-4936-8136-1649df456e1e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4j68z" [adac34d7-1884-4936-8136-1649df456e1e] Running
E0920 10:36:45.966210    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.008836098s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.18s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-704000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-704000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.14s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-704000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ntl4x" [d0b41533-8e19-4d9f-842b-4c8f64362b51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ntl4x" [d0b41533-8e19-4d9f-842b-4c8f64362b51] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.010453932s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-704000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (58.72s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (58.720385881s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.72s)

TestNetworkPlugins/group/bridge/Start (47.75s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0920 10:37:48.286919    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (47.748956754s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.75s)

TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mjctp" [a678821d-a756-4e6a-879a-86580d4fbd6b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.011453024s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-704000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.13s)

TestNetworkPlugins/group/bridge/NetCatPod (13.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-704000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8kmft" [34d67a4b-d43e-4f9c-88af-da8af8a58012] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8kmft" [34d67a4b-d43e-4f9c-88af-da8af8a58012] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.008095687s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.21s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-704000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/flannel/NetCatPod (12.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-704000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c4t5h" [cfef1dcc-4467-4048-9f49-e3c349ced58c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c4t5h" [cfef1dcc-4467-4048-9f49-e3c349ced58c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.008694122s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.18s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-704000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/bridge/DNS (25.75s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-704000 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-704000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121963176s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-704000 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-704000 exec deployment/netcat -- nslookup kubernetes.default: (10.122595707s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (25.75s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0920 10:38:21.368443    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
E0920 10:38:21.373814    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
E0920 10:38:21.383946    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0920 10:38:21.404938    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
E0920 10:38:21.446635    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (78.82s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0920 10:38:41.854153    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-704000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (1m18.817700379s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (78.82s)
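kubenet is the one group that bypasses --cni entirely and selects the legacy kubelet network plugin instead; trimmed to the relevant flags:

$ minikube start -p kubenet-704000 --memory=3072 --network-plugin=kubenet --driver=hyperkit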

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestStartStop/group/old-k8s-version/serial/FirstStart (151.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0920 10:39:07.790664    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kindnet-704000/client.crt: no such file or directory
E0920 10:39:18.031879    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kindnet-704000/client.crt: no such file or directory
E0920 10:39:38.513284    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kindnet-704000/client.crt: no such file or directory
E0920 10:39:43.297088    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
E0920 10:39:49.091146    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m31.007398299s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (151.01s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-704000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.14s)
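KubeletFlags ssh'es into the node and runs "pgrep -a kubelet" so the test can assert that the running kubelet's command line carries the flags the profile was started with. A hedged sketch of that kind of assertion; the exact flag expected for a kubenet profile is an assumption here, not copied from net_test.go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Fetch the running kubelet's full command line from inside the VM.
        out, err := exec.Command("out/minikube-darwin-amd64", "ssh",
            "-p", "kubenet-704000", "pgrep -a kubelet").CombinedOutput()
        if err != nil {
            fmt.Printf("ssh failed: %v\n%s", err, out)
            return
        }
        // A kubenet profile should have started the kubelet with the kubenet
        // network plugin (illustrative expectation).
        if !strings.Contains(string(out), "--network-plugin=kubenet") {
            fmt.Printf("kubelet flags missing kubenet plugin: %s", out)
        }
    }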

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (14.18s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-704000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hlbxr" [af3c4c05-464f-4082-87bc-00f467138eed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hlbxr" [af3c4c05-464f-4082-87bc-00f467138eed] Running
E0920 10:40:08.110584    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:40:08.117035    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:40:08.128175    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:40:08.149395    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:40:08.190403    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:40:08.270512    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:40:08.431106    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:40:08.751317    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:40:09.393391    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:40:10.673840    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.007937713s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.18s)
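The "waiting 15m0s for pods matching ..." lines come from a poll loop over the label selector that logs each observed pod state (Pending, Running, ...) until the pods are healthy or the timeout expires. A simplified stand-in for the pod-wait helper behind the helpers_test.go:344 lines, polling kubectl's jsonpath output and assuming a single matching pod:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPods polls the phase of pods matching a label selector until it
    // reads "Running" or the deadline passes.
    func waitForPods(kubeContext, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "get", "pods", "-l", selector,
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Running" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pods %q not Running within %s", selector, timeout)
    }

    func main() {
        if err := waitForPods("kubenet-704000", "app=netcat", 15*time.Minute); err != nil {
            fmt.Println(err)
        }
    }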

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-704000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-704000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)
E0920 10:55:19.099683    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:55:20.617223    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kindnet-704000/client.crt: no such file or directory
E0920 10:55:31.665939    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:56:27.449012    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:56:29.110523    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:56:31.189099    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:56:35.241052    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:56:35.366304    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
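The recurring cert_rotation.go:168 errors interleaved through this report are noise from the long-lived test-runner process (pid 1784): client-go's certificate-rotation watcher still tracks kubeconfig entries for profiles that have since been deleted (auto-704000, calico-704000, and so on), so each reload of those client certificates fails harmlessly with "no such file or directory". The failing operation is essentially an open of a removed file, as this sketch reproduces:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Opening a client.crt that was removed with its profile yields the
        // same *PathError text seen in the log lines above.
        _, err := os.Open("/Users/jenkins/minikube-integration/15927-1321" +
            "/.minikube/profiles/auto-704000/client.crt")
        if err != nil {
            fmt.Printf("key failed with : %v\n", err)
        }
    }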

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (58.57s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-948000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2
E0920 10:40:31.648675    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:31.653972    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:31.664245    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:31.684694    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:31.725255    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:31.807223    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:31.968590    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:32.289607    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:32.931015    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:34.212487    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:36.773451    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:41.893766    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:40:49.079720    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:40:52.134191    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:41:05.219499    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
E0920 10:41:12.614756    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-948000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2: (58.569550108s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-948000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f2372afd-5abd-4a2f-92a6-bf4eaa4f937b] Pending
helpers_test.go:344: "busybox" [f2372afd-5abd-4a2f-92a6-bf4eaa4f937b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 10:41:30.041454    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f2372afd-5abd-4a2f-92a6-bf4eaa4f937b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.015656219s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-948000 exec busybox -- /bin/sh -c "ulimit -n"
E0920 10:41:36.505433    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-770000 create -f testdata/busybox.yaml
E0920 10:41:35.224384    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:41:35.229460    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:41:35.239648    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:41:35.259877    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:41:35.300402    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1bb0d65c-3c95-43be-98fd-df479fd8b1bb] Pending
helpers_test.go:344: "busybox" [1bb0d65c-3c95-43be-98fd-df479fd8b1bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 10:41:35.382083    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:41:35.544145    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:41:35.864817    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1bb0d65c-3c95-43be-98fd-df479fd8b1bb] Running
E0920 10:41:40.348118    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:41:41.400009    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kindnet-704000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.01938147s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-770000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-948000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-948000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)
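EnableAddonWhileActive enables metrics-server with its image and registry overridden (--images=MetricsServer=registry.k8s.io/echoserver:1.4, --registries=MetricsServer=fake.domain), then describes the deployment to confirm the override landed. A sketch of that verification; the expected image string is inferred from the two flags rather than copied from the test source:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "no-preload-948000",
            "describe", "deploy/metrics-server", "-n", "kube-system").Output()
        if err != nil {
            fmt.Println("describe failed:", err)
            return
        }
        // With the registry and image both overridden, the deployment should
        // reference the fake registry plus the substituted echoserver image.
        want := "fake.domain/registry.k8s.io/echoserver:1.4"
        if !strings.Contains(string(out), want) {
            fmt.Printf("expected image %q in describe output\n", want)
        }
    }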

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (8.28s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-948000 --alsologtostderr -v=3
E0920 10:41:37.786441    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-948000 --alsologtostderr -v=3: (8.281106118s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-770000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-770000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (8.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-770000 --alsologtostderr -v=3
E0920 10:41:45.470635    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-770000 --alsologtostderr -v=3: (8.226905684s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-948000 -n no-preload-948000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-948000 -n no-preload-948000: exit status 7 (49.861744ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-948000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)
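The "exit status 7 (may be ok)" tolerance works because minikube status encodes component state in its exit code instead of treating a stopped cluster as a failure. A decoding sketch; the bit assignments below match my reading of minikube's status command and should be treated as illustrative, not authoritative:

    package main

    import "fmt"

    // Illustrative exit-code bits for "minikube status"; see minikube's
    // cmd/minikube/cmd/status.go for the authoritative values.
    const (
        hostNotRunning    = 1 << 0 // VM/host stopped
        clusterNotRunning = 1 << 1 // kubelet not running (e.g. stopped/paused)
        k8sNotRunning     = 1 << 2 // apiserver not serving
    )

    func explain(code int) {
        fmt.Printf("exit %d:", code)
        if code&hostNotRunning != 0 {
            fmt.Print(" host-stopped")
        }
        if code&clusterNotRunning != 0 {
            fmt.Print(" kubelet-stopped")
        }
        if code&k8sNotRunning != 0 {
            fmt.Print(" apiserver-stopped")
        }
        fmt.Println()
    }

    func main() {
        explain(7) // the fully "Stopped" case tolerated above
        explain(2) // the paused case seen in the Pause tests below
    }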

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (301.28s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-948000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2
E0920 10:41:46.042814    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:41:50.110381    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:41:50.116703    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:41:50.128309    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:41:50.149377    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:41:50.189526    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:41:50.269988    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:41:50.430878    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:41:50.751884    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:41:51.392192    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-948000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2: (5m1.126961433s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-948000 -n no-preload-948000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (301.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 7 (50.609629ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-770000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (494.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0920 10:41:52.672360    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:41:53.577206    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:41:55.233563    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:41:55.711339    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:42:00.355123    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:42:10.595710    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:42:16.191951    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:42:31.077538    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:42:48.365212    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:42:51.971478    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:42:57.153087    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:43:03.815102    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:03.820853    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:03.831607    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:03.852532    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:03.893697    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:03.974879    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:04.135442    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:04.456858    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:05.097912    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:06.378917    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:08.217018    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:08.223419    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:08.235399    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:08.256331    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:08.297774    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:08.379538    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:08.540773    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:08.862122    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:08.940241    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:09.503370    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:10.784508    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:12.038970    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:43:13.345014    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:14.061447    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:15.500167    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:43:18.466338    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:21.374260    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
E0920 10:43:22.136645    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:43:24.302760    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:28.707176    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:44.783389    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:43:49.063979    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
E0920 10:43:49.188066    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:43:57.548439    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kindnet-704000/client.crt: no such file or directory
E0920 10:44:11.419018    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:44:19.075029    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:44:25.243506    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kindnet-704000/client.crt: no such file or directory
E0920 10:44:25.745683    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:44:30.149004    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:44:33.961351    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:44:57.740460    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:44:57.746871    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:44:57.757447    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:44:57.778519    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:44:57.819274    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:44:57.899435    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:44:58.061124    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:44:58.382496    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:44:59.023153    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:45:00.304998    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:45:02.866876    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:45:07.988438    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:45:08.116140    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:45:18.230051    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:45:19.088964    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/ingress-addon-legacy-351000/client.crt: no such file or directory
E0920 10:45:31.654451    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:45:35.816140    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
E0920 10:45:38.710629    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:45:47.668118    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:45:52.072470    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:45:59.344277    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
E0920 10:46:19.672435    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:46:35.228253    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:46:46.048576    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (8m14.795958921s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (494.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2w96p" [dc562df2-604b-4bc8-8852-4dbdc4a2f3e4] Running
E0920 10:46:50.115792    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011115153s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2w96p" [dc562df2-604b-4bc8-8852-4dbdc4a2f3e4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006965692s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-948000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-948000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.18s)
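VerifyKubernetesImages lists images over ssh with "sudo crictl images -o json" and compares the repo tags against the set expected for the Kubernetes version, reporting extras such as the busybox test image above. A minimal parse of that JSON; the expected-set logic here is deliberately simplified (a real check would also admit gcr.io/k8s-minikube images, for example):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // criImages mirrors the shape of "crictl images -o json" output.
    type criImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-amd64", "ssh",
            "-p", "no-preload-948000", "sudo crictl images -o json").Output()
        if err != nil {
            fmt.Println("ssh failed:", err)
            return
        }
        var imgs criImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            fmt.Println("bad JSON:", err)
            return
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if !strings.HasPrefix(tag, "registry.k8s.io/") {
                    fmt.Println("Found non-minikube image:", tag)
                }
            }
        }
    }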

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (1.82s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-948000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-948000 -n no-preload-948000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-948000 -n no-preload-948000: exit status 2 (141.372136ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-948000 -n no-preload-948000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-948000 -n no-preload-948000: exit status 2 (145.75837ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-948000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-948000 -n no-preload-948000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-948000 -n no-preload-948000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.82s)
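The Pause test above is a pause, status, unpause, status cycle: immediately after pausing, status --format={{.APIServer}} prints Paused and --format={{.Kubelet}} prints Stopped (each query exiting 2, which the harness tolerates), and both fields are queried again after unpause. Condensed into a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // status queries one field of "minikube status"; a non-zero exit is
    // tolerated because the exit code encodes cluster state, and the field
    // value is still printed on stdout.
    func status(profile, field string) string {
        out, _ := exec.Command("out/minikube-darwin-amd64", "status",
            "--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
        return string(out)
    }

    func main() {
        profile := "no-preload-948000"
        _ = exec.Command("out/minikube-darwin-amd64", "pause", "-p", profile).Run()
        fmt.Println("APIServer:", status(profile, "APIServer")) // expect Paused
        fmt.Println("Kubelet:", status(profile, "Kubelet"))     // expect Stopped
        _ = exec.Command("out/minikube-darwin-amd64", "unpause", "-p", profile).Run()
        fmt.Println("APIServer:", status(profile, "APIServer")) // expect Running
        fmt.Println("Kubelet:", status(profile, "Kubelet"))     // expect Running
    }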

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (46.79s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-564000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2
E0920 10:47:17.804828    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:47:41.594132    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
E0920 10:47:48.370795    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-564000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2: (46.787668038s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.79s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-564000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f4e15b48-3015-4b44-8ff9-6a36e7536612] Pending
helpers_test.go:344: "busybox" [f4e15b48-3015-4b44-8ff9-6a36e7536612] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f4e15b48-3015-4b44-8ff9-6a36e7536612] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.016153496s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-564000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-564000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0920 10:48:03.820694    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-564000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (8.23s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-564000 --alsologtostderr -v=3
E0920 10:48:08.222916    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-564000 --alsologtostderr -v=3: (8.231190299s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-564000 -n embed-certs-564000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-564000 -n embed-certs-564000: exit status 7 (51.098844ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-564000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (297.95s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-564000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2
E0920 10:48:21.380199    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
E0920 10:48:31.511694    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:48:35.915833    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
E0920 10:48:57.554063    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kindnet-704000/client.crt: no such file or directory
E0920 10:49:57.746874    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-564000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2: (4m57.78871523s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-564000 -n embed-certs-564000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-5hlpg" [74afda2c-c60c-4b3a-af1b-50df907bedfa] Running
E0920 10:50:08.121124    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/calico-704000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012612888s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-5hlpg" [74afda2c-c60c-4b3a-af1b-50df907bedfa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007192206s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-770000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (1.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-770000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 2 (149.204314ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-770000 -n old-k8s-version-770000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 2 (149.579482ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-770000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-770000 -n old-k8s-version-770000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.70s)
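
The Pause subtest above pauses the cluster, asserts that "minikube status" reports Paused for the apiserver and Stopped for the kubelet (tolerating the non-zero exit it returns while paused), then unpauses. A sketch of that sequence driven from Go, using the binary path and profile name shown in the log; illustrative, not the test's own code:

    // pausecheck.go: pause -> status -> unpause against one profile.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // run invokes the minikube binary and returns its output and exit code.
    func run(args ...string) (string, int) {
        out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
        code := 0
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            code = ee.ExitCode()
        }
        return strings.TrimSpace(string(out)), code
    }

    func main() {
        const profile = "old-k8s-version-770000"
        run("pause", "-p", profile, "--alsologtostderr", "-v=1")

        // While paused, status exits 2; the harness notes this "may be ok".
        api, code := run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
        fmt.Printf("APIServer=%q exit=%d\n", api, code)

        run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
    }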

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-931000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2
E0920 10:50:31.660692    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/custom-flannel-704000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-931000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2: (51.024112914s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.02s)
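
The point of this group is that the cluster serves the API on a non-default port (--apiserver-port=8444). One quick way to confirm the port took effect is to read the server URL minikube wrote into the kubeconfig; a sketch with client-go's clientcmd loader, using the kubeconfig path from this run's environment (an assumption, adjust for your setup):

    // checkport.go: print the API server URL for the profile's cluster entry.
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/15927-1321/kubeconfig")
        if err != nil {
            panic(err)
        }
        cluster, ok := cfg.Clusters["default-k8s-diff-port-931000"]
        if !ok {
            panic("profile not found in kubeconfig")
        }
        // Expect something like https://<vm-ip>:8444 for this profile.
        fmt.Println("API server:", cluster.Server)
    }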

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-931000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [829c0937-cdc0-4dbc-9e29-98eade2371a3] Pending
helpers_test.go:344: "busybox" [829c0937-cdc0-4dbc-9e29-98eade2371a3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [829c0937-cdc0-4dbc-9e29-98eade2371a3] Running
E0920 10:51:27.442787    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:51:27.448338    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:51:27.459667    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:51:27.481663    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:51:27.523364    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:51:27.604623    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:51:27.765223    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:51:28.087062    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.018873538s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-931000 exec busybox -- /bin/sh -c "ulimit -n"
E0920 10:51:28.727804    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)
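
After the busybox pod settles (Pending, then ContainersNotReady, then Running, as the helpers_test lines show), the test execs into it and reads "ulimit -n", the per-process open-file limit. The same check can be reproduced by hand; a sketch shelling out to kubectl with the context name from the log (illustrative, not the test's code):

    // ulimitcheck.go: read the fd limit inside the busybox pod.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl",
            "--context", "default-k8s-diff-port-931000",
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        // busybox prints a single number: the container's open-file limit.
        fmt.Printf("ulimit -n in pod: %s", out)
    }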

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-931000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-931000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.82s)
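
The enable step above overrides the metrics-server image (MetricsServer=registry.k8s.io/echoserver:1.4, pulled from the fake.domain registry), and the follow-up kubectl describe lets the test confirm the substitution. A sketch of the same verification with client-go, reading the container image straight off the deployment (names taken from the log; the check itself is illustrative):

    // imagecheck.go: print the image the metrics-server deployment runs.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        overrides := &clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-931000"}
        cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        deploy, err := client.AppsV1().Deployments("kube-system").Get(
            context.TODO(), "metrics-server", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // After the override, the image should carry the fake.domain prefix.
        fmt.Println(deploy.Spec.Template.Spec.Containers[0].Image)
    }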

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-931000 --alsologtostderr -v=3
E0920 10:51:30.008190    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:51:32.569806    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:51:35.235730    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/false-704000/client.crt: no such file or directory
E0920 10:51:35.360518    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:35.367009    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:35.378718    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:35.400439    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:35.440811    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:35.521620    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:35.683841    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:36.004487    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:36.646838    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:37.690609    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-931000 --alsologtostderr -v=3: (8.23160371s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.23s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000: exit status 7 (50.041527ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-931000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0920 10:51:37.927935    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-931000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2
E0920 10:51:40.488851    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:45.609097    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:51:46.054350    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
E0920 10:51:47.931810    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:51:50.121886    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/enable-default-cni-704000/client.crt: no such file or directory
E0920 10:51:55.849390    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:52:08.413403    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:52:16.331232    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:52:48.376149    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/skaffold-273000/client.crt: no such file or directory
E0920 10:52:49.374596    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
E0920 10:52:57.293419    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
E0920 10:53:03.825410    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/flannel-704000/client.crt: no such file or directory
E0920 10:53:08.229653    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/bridge-704000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-931000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2: (4m59.494100712s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.66s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z9brm" [b952a4e8-e5c2-4fcd-9c76-e21215a2cabf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012435492s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z9brm" [b952a4e8-e5c2-4fcd-9c76-e21215a2cabf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006784226s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-564000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-564000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.18s)
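
VerifyKubernetesImages pulls the image list out of the VM with "sudo crictl images -o json" and diffs it against the set expected for the Kubernetes version; anything extra is reported, like the busybox image above. A sketch of reading that JSON from stdin (the {"images":[{"repoTags":[...]}]} shape matches crictl's -o json output):

    // images.go: list repo tags from `crictl images -o json` piped to stdin,
    // e.g. minikube ssh -p embed-certs-564000 "sudo crictl images -o json" | go run images.go
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        var list imageList
        if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag) // the test diffs these against an expected set
            }
        }
    }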

TestStartStop/group/embed-certs/serial/Pause (1.79s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-564000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-564000 -n embed-certs-564000
E0920 10:53:21.385718    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-564000 -n embed-certs-564000: exit status 2 (145.740356ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-564000 -n embed-certs-564000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-564000 -n embed-certs-564000: exit status 2 (144.37246ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-564000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-564000 -n embed-certs-564000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-564000 -n embed-certs-564000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.79s)

TestStartStop/group/newest-cni/serial/FirstStart (47.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-465000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2
E0920 10:53:57.558899    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kindnet-704000/client.crt: no such file or directory
E0920 10:54:11.296503    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/no-preload-948000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-465000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2: (47.943607618s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.94s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-465000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (8.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-465000 --alsologtostderr -v=3
E0920 10:54:19.215717    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/old-k8s-version-770000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-465000 --alsologtostderr -v=3: (8.236853542s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-465000 -n newest-cni-465000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-465000 -n newest-cni-465000: exit status 7 (50.172571ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-465000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/SecondStart (35.69s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-465000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2
E0920 10:54:44.438053    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/auto-704000/client.crt: no such file or directory
E0920 10:54:57.751532    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/kubenet-704000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-465000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2: (35.537210112s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-465000 -n newest-cni-465000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.69s)
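
Both starts in this group pass --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 through to kubeadm. Whether the override landed can be checked by reading the node's allocated podCIDR; a client-go sketch using the context name from the log (a demonstration, not part of the suite):

    // cidrcheck.go: print each node's podCIDR; expect a subnet of 10.42.0.0/16.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        overrides := &clientcmd.ConfigOverrides{CurrentContext: "newest-cni-465000"}
        cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s podCIDR=%s\n", n.Name, n.Spec.PodCIDR)
        }
    }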

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-465000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/newest-cni/serial/Pause (1.74s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-465000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-465000 -n newest-cni-465000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-465000 -n newest-cni-465000: exit status 2 (148.022964ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-465000 -n newest-cni-465000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-465000 -n newest-cni-465000: exit status 2 (147.871364ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-465000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-465000 -n newest-cni-465000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-465000 -n newest-cni-465000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.74s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hgtkl" [ae530ae3-2ed1-46d6-ae2c-a77877eaf692] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013354854s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hgtkl" [ae530ae3-2ed1-46d6-ae2c-a77877eaf692] Running
E0920 10:56:46.059577    1784 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15927-1321/.minikube/profiles/functional-477000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008115581s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-931000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-931000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-931000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000: exit status 2 (154.24615ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000: exit status 2 (151.152149ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-931000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-931000 -n default-k8s-diff-port-931000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.84s)

Test skip (19/307)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-704000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-704000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-704000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-704000

>>> host: docker daemon status:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: docker daemon config:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: docker system info:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: cri-docker daemon status:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: cri-docker daemon config:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: cri-dockerd version:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: containerd daemon status:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: containerd daemon config:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: containerd config dump:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: crio daemon status:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: crio daemon config:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: /etc/crio:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

>>> host: crio config:
* Profile "cilium-704000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-704000"

----------------------- debugLogs end: cilium-704000 [took: 4.965224332s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-704000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-704000
--- SKIP: TestNetworkPlugins/group/cilium (5.34s)
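For reference, a skip like the one above typically comes from a name gate inside a table-driven test. This is a simplified, hypothetical sketch only; the suite's actual gating condition may differ:

package example

import "testing"

// TestNetworkPluginsSketch is a hypothetical, simplified sketch of a
// table-driven test that skips one plugin case by name.
func TestNetworkPluginsSketch(t *testing.T) {
	cases := []struct{ name string }{{"auto"}, {"cilium"}}
	for _, tc := range cases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			if tc.name == "cilium" {
				t.Skip("skipping cilium")
			}
			// ... start a cluster with the plugin and verify networking ...
		})
	}
}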

TestStartStop/group/disable-driver-mounts (0.36s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-865000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-865000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.36s)
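The skip message above names the gate directly: this group only runs with the virtualbox driver. A hypothetical helper expressing that condition (the actual check is at start_stop_delete_test.go:103 in the log above):

package example

import "testing"

// skipUnlessVirtualBox mirrors the gate logged above: these subtests
// only run with the virtualbox driver. Sketch only; the real check
// lives in the test suite.
func skipUnlessVirtualBox(t *testing.T, driver string) {
	t.Helper()
	if driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}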
