Test Report: Hyperkit_macOS 17738

8768890baa5a64021183265111cefbb8aeebcf2d:2023-12-08:32200

Failed tests (5/310)

Order  Failed test                                 Duration (s)
32     TestAddons/Setup                            15.35
190    TestMinikubeProfile                         65.49
217    TestMultiNode/serial/ValidateNameConflict   100.45
245    TestStoppedBinaryUpgrade/Upgrade            115.94
280    TestNetworkPlugins/group/calico/Start       15.42
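The table above has three columns: run order, test name, and duration. A minimal sketch of how those rows can be parsed into structured records (field names are illustrative, and durations are assumed to be in seconds based on the per-test headings below):

```python
# Parse the failed-test table into records: run order, test name, duration.
failed = """\
32 TestAddons/Setup 15.35
190 TestMinikubeProfile 65.49
217 TestMultiNode/serial/ValidateNameConflict 100.45
245 TestStoppedBinaryUpgrade/Upgrade 115.94
280 TestNetworkPlugins/group/calico/Start 15.42
"""

rows = []
for line in failed.splitlines():
    order, name, seconds = line.split()
    rows.append({"order": int(order), "test": name, "seconds": float(seconds)})

print(len(rows))                         # 5 failed tests out of 310
print(max(r["seconds"] for r in rows))   # slowest failure: 115.94
```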
TestAddons/Setup (15.35s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-249000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p addons-249000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 90 (15.345483362s)

-- stdout --
	* [addons-249000] minikube v1.32.0 on Darwin 14.1.2
	  - MINIKUBE_LOCATION=17738
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node addons-249000 in cluster addons-249000
	* Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1208 10:10:37.874769    1696 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:10:37.875059    1696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:10:37.875065    1696 out.go:309] Setting ErrFile to fd 2...
	I1208 10:10:37.875069    1696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:10:37.875251    1696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	I1208 10:10:37.876645    1696 out.go:303] Setting JSON to false
	I1208 10:10:37.898690    1696 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":593,"bootTime":1702058444,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1208 10:10:37.898797    1696 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1208 10:10:37.920109    1696 out.go:177] * [addons-249000] minikube v1.32.0 on Darwin 14.1.2
	I1208 10:10:37.961971    1696 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 10:10:37.962022    1696 notify.go:220] Checking for updates...
	I1208 10:10:37.983875    1696 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	I1208 10:10:38.004619    1696 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1208 10:10:38.025840    1696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 10:10:38.046796    1696 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	I1208 10:10:38.067777    1696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 10:10:38.089285    1696 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 10:10:38.118797    1696 out.go:177] * Using the hyperkit driver based on user configuration
	I1208 10:10:38.160765    1696 start.go:298] selected driver: hyperkit
	I1208 10:10:38.160793    1696 start.go:902] validating driver "hyperkit" against <nil>
	I1208 10:10:38.160815    1696 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 10:10:38.165727    1696 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:10:38.165844    1696 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17738-1113/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1208 10:10:38.173733    1696 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1208 10:10:38.177581    1696 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:10:38.177607    1696 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1208 10:10:38.177644    1696 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1208 10:10:38.177839    1696 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 10:10:38.177906    1696 cni.go:84] Creating CNI manager for ""
	I1208 10:10:38.177921    1696 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1208 10:10:38.177929    1696 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 10:10:38.177937    1696 start_flags.go:323] config:
	{Name:addons-249000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-249000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 10:10:38.178075    1696 iso.go:125] acquiring lock: {Name:mk933f5286cca8a935e46c54218c5cced15285e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:10:38.219790    1696 out.go:177] * Starting control plane node addons-249000 in cluster addons-249000
	I1208 10:10:38.240791    1696 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1208 10:10:38.240865    1696 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1208 10:10:38.240892    1696 cache.go:56] Caching tarball of preloaded images
	I1208 10:10:38.241085    1696 preload.go:174] Found /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1208 10:10:38.241106    1696 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1208 10:10:38.241615    1696 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/addons-249000/config.json ...
	I1208 10:10:38.241657    1696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/addons-249000/config.json: {Name:mk94b249887c0e61b540d7f365726e31ac6f4bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 10:10:38.242316    1696 start.go:365] acquiring machines lock for addons-249000: {Name:mkf6539d901e554b062746e761b420c8557e3211 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1208 10:10:38.242540    1696 start.go:369] acquired machines lock for "addons-249000" in 200.861µs
	I1208 10:10:38.242583    1696 start.go:93] Provisioning new machine with config: &{Name:addons-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:addons-249000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1208 10:10:38.242673    1696 start.go:125] createHost starting for "" (driver="hyperkit")
	I1208 10:10:38.284751    1696 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1208 10:10:38.285116    1696 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:10:38.285171    1696 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:10:38.293604    1696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49360
	I1208 10:10:38.293984    1696 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:10:38.294393    1696 main.go:141] libmachine: Using API Version  1
	I1208 10:10:38.294403    1696 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:10:38.294616    1696 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:10:38.294727    1696 main.go:141] libmachine: (addons-249000) Calling .GetMachineName
	I1208 10:10:38.294815    1696 main.go:141] libmachine: (addons-249000) Calling .DriverName
	I1208 10:10:38.294917    1696 start.go:159] libmachine.API.Create for "addons-249000" (driver="hyperkit")
	I1208 10:10:38.294944    1696 client.go:168] LocalClient.Create starting
	I1208 10:10:38.294981    1696 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem
	I1208 10:10:38.453255    1696 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem
	I1208 10:10:38.513672    1696 main.go:141] libmachine: Running pre-create checks...
	I1208 10:10:38.513683    1696 main.go:141] libmachine: (addons-249000) Calling .PreCreateCheck
	I1208 10:10:38.513829    1696 main.go:141] libmachine: (addons-249000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:10:38.514027    1696 main.go:141] libmachine: (addons-249000) Calling .GetConfigRaw
	I1208 10:10:38.514430    1696 main.go:141] libmachine: Creating machine...
	I1208 10:10:38.514443    1696 main.go:141] libmachine: (addons-249000) Calling .Create
	I1208 10:10:38.514530    1696 main.go:141] libmachine: (addons-249000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:10:38.514690    1696 main.go:141] libmachine: (addons-249000) DBG | I1208 10:10:38.514519    1706 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17738-1113/.minikube
	I1208 10:10:38.514766    1696 main.go:141] libmachine: (addons-249000) Downloading /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17738-1113/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1208 10:10:38.719686    1696 main.go:141] libmachine: (addons-249000) DBG | I1208 10:10:38.719602    1706 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/id_rsa...
	I1208 10:10:38.929828    1696 main.go:141] libmachine: (addons-249000) DBG | I1208 10:10:38.929743    1706 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/addons-249000.rawdisk...
	I1208 10:10:38.929841    1696 main.go:141] libmachine: (addons-249000) DBG | Writing magic tar header
	I1208 10:10:38.929850    1696 main.go:141] libmachine: (addons-249000) DBG | Writing SSH key tar header
	I1208 10:10:38.930527    1696 main.go:141] libmachine: (addons-249000) DBG | I1208 10:10:38.930486    1706 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000 ...
	I1208 10:10:39.256830    1696 main.go:141] libmachine: (addons-249000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:10:39.256846    1696 main.go:141] libmachine: (addons-249000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/hyperkit.pid
	I1208 10:10:39.256884    1696 main.go:141] libmachine: (addons-249000) DBG | Using UUID 172457d8-95f5-11ee-9ca2-f01898ef957c
	I1208 10:10:39.491441    1696 main.go:141] libmachine: (addons-249000) DBG | Generated MAC 46:9f:cb:fd:ea:4f
	I1208 10:10:39.491466    1696 main.go:141] libmachine: (addons-249000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-249000
	I1208 10:10:39.491502    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"172457d8-95f5-11ee-9ca2-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e4210)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1208 10:10:39.491559    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"172457d8-95f5-11ee-9ca2-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e4210)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1208 10:10:39.491620    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "172457d8-95f5-11ee-9ca2-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/addons-249000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/tty,log=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/bzimage,/Users/jenkins/minikube-integration/17738-1113/.minikube/machine
s/addons-249000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-249000"}
	I1208 10:10:39.491663    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 172457d8-95f5-11ee-9ca2-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/addons-249000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/tty,log=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/console-ring -f kexec,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/bzimage,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-249000"
	I1208 10:10:39.491686    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1208 10:10:39.494802    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 DEBUG: hyperkit: Pid is 1711
	I1208 10:10:39.495236    1696 main.go:141] libmachine: (addons-249000) DBG | Attempt 0
	I1208 10:10:39.495253    1696 main.go:141] libmachine: (addons-249000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:10:39.495358    1696 main.go:141] libmachine: (addons-249000) DBG | hyperkit pid from json: 1711
	I1208 10:10:39.496341    1696 main.go:141] libmachine: (addons-249000) DBG | Searching for 46:9f:cb:fd:ea:4f in /var/db/dhcpd_leases ...
	I1208 10:10:39.496391    1696 main.go:141] libmachine: (addons-249000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I1208 10:10:39.496408    1696 main.go:141] libmachine: (addons-249000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:10:39.515838    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1208 10:10:39.607676    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1208 10:10:39.608511    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1208 10:10:39.608533    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1208 10:10:39.608544    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1208 10:10:39.608552    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1208 10:10:40.117673    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1208 10:10:40.117691    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1208 10:10:40.222702    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1208 10:10:40.222720    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1208 10:10:40.222739    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1208 10:10:40.222750    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1208 10:10:40.223630    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1208 10:10:40.223649    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1208 10:10:41.498973    1696 main.go:141] libmachine: (addons-249000) DBG | Attempt 1
	I1208 10:10:41.498990    1696 main.go:141] libmachine: (addons-249000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:10:41.499047    1696 main.go:141] libmachine: (addons-249000) DBG | hyperkit pid from json: 1711
	I1208 10:10:41.499836    1696 main.go:141] libmachine: (addons-249000) DBG | Searching for 46:9f:cb:fd:ea:4f in /var/db/dhcpd_leases ...
	I1208 10:10:41.499871    1696 main.go:141] libmachine: (addons-249000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I1208 10:10:41.499882    1696 main.go:141] libmachine: (addons-249000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:10:43.500965    1696 main.go:141] libmachine: (addons-249000) DBG | Attempt 2
	I1208 10:10:43.500984    1696 main.go:141] libmachine: (addons-249000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:10:43.501107    1696 main.go:141] libmachine: (addons-249000) DBG | hyperkit pid from json: 1711
	I1208 10:10:43.501895    1696 main.go:141] libmachine: (addons-249000) DBG | Searching for 46:9f:cb:fd:ea:4f in /var/db/dhcpd_leases ...
	I1208 10:10:43.501931    1696 main.go:141] libmachine: (addons-249000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I1208 10:10:43.501946    1696 main.go:141] libmachine: (addons-249000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:10:45.177643    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1208 10:10:45.177673    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1208 10:10:45.177681    1696 main.go:141] libmachine: (addons-249000) DBG | 2023/12/08 10:10:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1208 10:10:45.503733    1696 main.go:141] libmachine: (addons-249000) DBG | Attempt 3
	I1208 10:10:45.503755    1696 main.go:141] libmachine: (addons-249000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:10:45.503844    1696 main.go:141] libmachine: (addons-249000) DBG | hyperkit pid from json: 1711
	I1208 10:10:45.504901    1696 main.go:141] libmachine: (addons-249000) DBG | Searching for 46:9f:cb:fd:ea:4f in /var/db/dhcpd_leases ...
	I1208 10:10:45.504926    1696 main.go:141] libmachine: (addons-249000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I1208 10:10:45.504944    1696 main.go:141] libmachine: (addons-249000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:10:47.505974    1696 main.go:141] libmachine: (addons-249000) DBG | Attempt 4
	I1208 10:10:47.505992    1696 main.go:141] libmachine: (addons-249000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:10:47.506058    1696 main.go:141] libmachine: (addons-249000) DBG | hyperkit pid from json: 1711
	I1208 10:10:47.506827    1696 main.go:141] libmachine: (addons-249000) DBG | Searching for 46:9f:cb:fd:ea:4f in /var/db/dhcpd_leases ...
	I1208 10:10:47.506840    1696 main.go:141] libmachine: (addons-249000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I1208 10:10:47.506868    1696 main.go:141] libmachine: (addons-249000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:10:49.508737    1696 main.go:141] libmachine: (addons-249000) DBG | Attempt 5
	I1208 10:10:49.508767    1696 main.go:141] libmachine: (addons-249000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:10:49.508887    1696 main.go:141] libmachine: (addons-249000) DBG | hyperkit pid from json: 1711
	I1208 10:10:49.510288    1696 main.go:141] libmachine: (addons-249000) DBG | Searching for 46:9f:cb:fd:ea:4f in /var/db/dhcpd_leases ...
	I1208 10:10:49.510340    1696 main.go:141] libmachine: (addons-249000) DBG | Found 2 entries in /var/db/dhcpd_leases!
	I1208 10:10:49.510379    1696 main.go:141] libmachine: (addons-249000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:10:49.510421    1696 main.go:141] libmachine: (addons-249000) DBG | Found match: 46:9f:cb:fd:ea:4f
	I1208 10:10:49.510447    1696 main.go:141] libmachine: (addons-249000) DBG | IP: 192.169.0.3
	I1208 10:10:49.510452    1696 main.go:141] libmachine: (addons-249000) Calling .GetConfigRaw
	I1208 10:10:49.511263    1696 main.go:141] libmachine: (addons-249000) Calling .DriverName
	I1208 10:10:49.511414    1696 main.go:141] libmachine: (addons-249000) Calling .DriverName
	I1208 10:10:49.511565    1696 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1208 10:10:49.511579    1696 main.go:141] libmachine: (addons-249000) Calling .GetState
	I1208 10:10:49.511707    1696 main.go:141] libmachine: (addons-249000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:10:49.511782    1696 main.go:141] libmachine: (addons-249000) DBG | hyperkit pid from json: 1711
	I1208 10:10:49.512805    1696 main.go:141] libmachine: Detecting operating system of created instance...
	I1208 10:10:49.512823    1696 main.go:141] libmachine: Waiting for SSH to be available...
	I1208 10:10:49.512831    1696 main.go:141] libmachine: Getting to WaitForSSH function...
	I1208 10:10:49.512839    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:49.512968    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:49.513068    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.513169    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.513274    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:49.513617    1696 main.go:141] libmachine: Using SSH client type: native
	I1208 10:10:49.513889    1696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.3 22 <nil> <nil>}
	I1208 10:10:49.513897    1696 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1208 10:10:49.583545    1696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 10:10:49.583558    1696 main.go:141] libmachine: Detecting the provisioner...
	I1208 10:10:49.583564    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:49.583697    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:49.583793    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.583902    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.583997    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:49.584137    1696 main.go:141] libmachine: Using SSH client type: native
	I1208 10:10:49.584393    1696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.3 22 <nil> <nil>}
	I1208 10:10:49.584401    1696 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1208 10:10:49.653822    1696 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1208 10:10:49.653878    1696 main.go:141] libmachine: found compatible host: buildroot
	I1208 10:10:49.653885    1696 main.go:141] libmachine: Provisioning with buildroot...
	I1208 10:10:49.653891    1696 main.go:141] libmachine: (addons-249000) Calling .GetMachineName
	I1208 10:10:49.654025    1696 buildroot.go:166] provisioning hostname "addons-249000"
	I1208 10:10:49.654036    1696 main.go:141] libmachine: (addons-249000) Calling .GetMachineName
	I1208 10:10:49.654122    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:49.654202    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:49.654302    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.654392    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.654476    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:49.654631    1696 main.go:141] libmachine: Using SSH client type: native
	I1208 10:10:49.654871    1696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.3 22 <nil> <nil>}
	I1208 10:10:49.654881    1696 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-249000 && echo "addons-249000" | sudo tee /etc/hostname
	I1208 10:10:49.733526    1696 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-249000
	
	I1208 10:10:49.733551    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:49.733680    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:49.733772    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.733859    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.733961    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:49.734079    1696 main.go:141] libmachine: Using SSH client type: native
	I1208 10:10:49.734328    1696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.3 22 <nil> <nil>}
	I1208 10:10:49.734340    1696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-249000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-249000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-249000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 10:10:49.807589    1696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 10:10:49.807609    1696 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17738-1113/.minikube CaCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17738-1113/.minikube}
	I1208 10:10:49.807633    1696 buildroot.go:174] setting up certificates
	I1208 10:10:49.807645    1696 provision.go:83] configureAuth start
	I1208 10:10:49.807652    1696 main.go:141] libmachine: (addons-249000) Calling .GetMachineName
	I1208 10:10:49.807789    1696 main.go:141] libmachine: (addons-249000) Calling .GetIP
	I1208 10:10:49.807880    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:49.807969    1696 provision.go:138] copyHostCerts
	I1208 10:10:49.808058    1696 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/cert.pem (1123 bytes)
	I1208 10:10:49.808328    1696 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/key.pem (1679 bytes)
	I1208 10:10:49.808530    1696 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/ca.pem (1078 bytes)
	I1208 10:10:49.808681    1696 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca-key.pem org=jenkins.addons-249000 san=[192.169.0.3 192.169.0.3 localhost 127.0.0.1 minikube addons-249000]
	I1208 10:10:49.902398    1696 provision.go:172] copyRemoteCerts
	I1208 10:10:49.902465    1696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 10:10:49.902481    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:49.902623    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:49.902718    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.902836    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:49.902925    1696 sshutil.go:53] new ssh client: &{IP:192.169.0.3 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/id_rsa Username:docker}
	I1208 10:10:49.943321    1696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 10:10:49.958999    1696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 10:10:49.974604    1696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1208 10:10:49.990091    1696 provision.go:86] duration metric: configureAuth took 182.387495ms
	I1208 10:10:49.990104    1696 buildroot.go:189] setting minikube options for container-runtime
	I1208 10:10:49.990239    1696 config.go:182] Loaded profile config "addons-249000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1208 10:10:49.990252    1696 main.go:141] libmachine: (addons-249000) Calling .DriverName
	I1208 10:10:49.990386    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:49.990477    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:49.990562    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.990638    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:49.990727    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:49.990837    1696 main.go:141] libmachine: Using SSH client type: native
	I1208 10:10:49.991072    1696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.3 22 <nil> <nil>}
	I1208 10:10:49.991082    1696 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1208 10:10:50.061707    1696 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1208 10:10:50.061719    1696 buildroot.go:70] root file system type: tmpfs
	I1208 10:10:50.061798    1696 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1208 10:10:50.061811    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:50.061950    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:50.062046    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:50.062148    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:50.062243    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:50.062383    1696 main.go:141] libmachine: Using SSH client type: native
	I1208 10:10:50.062635    1696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.3 22 <nil> <nil>}
	I1208 10:10:50.062682    1696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1208 10:10:50.142705    1696 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1208 10:10:50.142725    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:50.142883    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:50.142989    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:50.143098    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:50.143187    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:50.143318    1696 main.go:141] libmachine: Using SSH client type: native
	I1208 10:10:50.143559    1696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.3 22 <nil> <nil>}
	I1208 10:10:50.143572    1696 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1208 10:10:50.622884    1696 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1208 10:10:50.622899    1696 main.go:141] libmachine: Checking connection to Docker...
	I1208 10:10:50.622906    1696 main.go:141] libmachine: (addons-249000) Calling .GetURL
	I1208 10:10:50.623049    1696 main.go:141] libmachine: Docker is up and running!
	I1208 10:10:50.623062    1696 main.go:141] libmachine: Reticulating splines...
	I1208 10:10:50.623066    1696 client.go:171] LocalClient.Create took 12.323605108s
	I1208 10:10:50.623079    1696 start.go:167] duration metric: libmachine.API.Create for "addons-249000" took 12.323650103s
	I1208 10:10:50.623088    1696 start.go:300] post-start starting for "addons-249000" (driver="hyperkit")
	I1208 10:10:50.623098    1696 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 10:10:50.623108    1696 main.go:141] libmachine: (addons-249000) Calling .DriverName
	I1208 10:10:50.623260    1696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 10:10:50.623273    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:50.623357    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:50.623446    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:50.623541    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:50.623625    1696 sshutil.go:53] new ssh client: &{IP:192.169.0.3 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/id_rsa Username:docker}
	I1208 10:10:50.664246    1696 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 10:10:50.666931    1696 info.go:137] Remote host: Buildroot 2021.02.12
	I1208 10:10:50.666945    1696 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17738-1113/.minikube/addons for local assets ...
	I1208 10:10:50.667065    1696 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17738-1113/.minikube/files for local assets ...
	I1208 10:10:50.667113    1696 start.go:303] post-start completed in 44.007889ms
	I1208 10:10:50.667131    1696 main.go:141] libmachine: (addons-249000) Calling .GetConfigRaw
	I1208 10:10:50.667679    1696 main.go:141] libmachine: (addons-249000) Calling .GetIP
	I1208 10:10:50.667811    1696 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/addons-249000/config.json ...
	I1208 10:10:50.668153    1696 start.go:128] duration metric: createHost completed in 12.42091574s
	I1208 10:10:50.668168    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:50.668256    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:50.668350    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:50.668425    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:50.668493    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:50.668587    1696 main.go:141] libmachine: Using SSH client type: native
	I1208 10:10:50.668819    1696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.3 22 <nil> <nil>}
	I1208 10:10:50.668826    1696 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1208 10:10:50.737882    1696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702059050.627383625
	
	I1208 10:10:50.737894    1696 fix.go:206] guest clock: 1702059050.627383625
	I1208 10:10:50.737900    1696 fix.go:219] Guest: 2023-12-08 10:10:50.627383625 -0800 PST Remote: 2023-12-08 10:10:50.668163 -0800 PST m=+12.832628634 (delta=-40.779375ms)
	I1208 10:10:50.737922    1696 fix.go:190] guest clock delta is within tolerance: -40.779375ms
	I1208 10:10:50.737926    1696 start.go:83] releasing machines lock for "addons-249000", held for 12.490805354s
	I1208 10:10:50.737944    1696 main.go:141] libmachine: (addons-249000) Calling .DriverName
	I1208 10:10:50.738072    1696 main.go:141] libmachine: (addons-249000) Calling .GetIP
	I1208 10:10:50.738152    1696 main.go:141] libmachine: (addons-249000) Calling .DriverName
	I1208 10:10:50.738465    1696 main.go:141] libmachine: (addons-249000) Calling .DriverName
	I1208 10:10:50.738574    1696 main.go:141] libmachine: (addons-249000) Calling .DriverName
	I1208 10:10:50.738655    1696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 10:10:50.738682    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:50.738711    1696 ssh_runner.go:195] Run: cat /version.json
	I1208 10:10:50.738747    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHHostname
	I1208 10:10:50.738793    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:50.738857    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHPort
	I1208 10:10:50.738884    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:50.738934    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHKeyPath
	I1208 10:10:50.738993    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:50.739018    1696 main.go:141] libmachine: (addons-249000) Calling .GetSSHUsername
	I1208 10:10:50.739070    1696 sshutil.go:53] new ssh client: &{IP:192.169.0.3 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/id_rsa Username:docker}
	I1208 10:10:50.739095    1696 sshutil.go:53] new ssh client: &{IP:192.169.0.3 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/addons-249000/id_rsa Username:docker}
	I1208 10:10:50.844518    1696 ssh_runner.go:195] Run: systemctl --version
	I1208 10:10:50.848730    1696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 10:10:50.852258    1696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 10:10:50.852298    1696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 10:10:50.861977    1696 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1208 10:10:50.861994    1696 start.go:475] detecting cgroup driver to use...
	I1208 10:10:50.862086    1696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 10:10:50.874776    1696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1208 10:10:50.881058    1696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 10:10:50.887289    1696 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 10:10:50.887331    1696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 10:10:50.893595    1696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 10:10:50.900083    1696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 10:10:50.906336    1696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 10:10:50.912550    1696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 10:10:50.918867    1696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 10:10:50.925121    1696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 10:10:50.930728    1696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 10:10:50.936333    1696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:10:51.018422    1696 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 10:10:51.031211    1696 start.go:475] detecting cgroup driver to use...
	I1208 10:10:51.031282    1696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1208 10:10:51.041977    1696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 10:10:51.052848    1696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 10:10:51.065216    1696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 10:10:51.074166    1696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 10:10:51.082968    1696 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 10:10:51.103961    1696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 10:10:51.113536    1696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 10:10:51.126051    1696 ssh_runner.go:195] Run: which cri-dockerd
	I1208 10:10:51.128469    1696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1208 10:10:51.133946    1696 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1208 10:10:51.144941    1696 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1208 10:10:51.227349    1696 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1208 10:10:51.310253    1696 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1208 10:10:51.310327    1696 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1208 10:10:51.320685    1696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:10:51.404868    1696 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1208 10:10:52.636470    1696 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.231310829s)
	I1208 10:10:52.636552    1696 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1208 10:10:52.717726    1696 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1208 10:10:52.810158    1696 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1208 10:10:52.903786    1696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:10:52.999638    1696 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1208 10:10:53.011002    1696 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1208 10:10:53.039510    1696 out.go:177] 
	W1208 10:10:53.064311    1696 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Fri 2023-12-08 18:10:47 UTC, ends at Fri 2023-12-08 18:10:52 UTC. --
	Dec 08 18:10:48 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 08 18:10:48 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 08 18:10:50 addons-249000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 08 18:10:50 addons-249000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 08 18:10:50 addons-249000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 08 18:10:50 addons-249000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 08 18:10:50 addons-249000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 08 18:10:52 addons-249000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 08 18:10:52 addons-249000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 08 18:10:52 addons-249000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 08 18:10:52 addons-249000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 08 18:10:52 addons-249000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1208 10:10:53.064347    1696 out.go:239] * 
	W1208 10:10:53.065545    1696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 10:10:53.128350    1696 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:111: out/minikube-darwin-amd64 start -p addons-249000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 90
--- FAIL: TestAddons/Setup (15.35s)
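
The journal above shows the failure signature: systemd refuses to listen on `cri-docker.socket` while `cri-docker.service` is already active ("Socket service cri-docker.service already active, refusing."), so `systemctl restart cri-docker.socket` exits non-zero. A minimal sketch of detecting that signature from captured journal text follows; the sample lines are copied from the log above, and the remediation commands (which would have to run inside the VM, so they are shown only as comments) are an assumption about the usual workaround, not something this run verified:

```shell
#!/bin/sh
# Detect the cri-docker.socket restart race from captured journal output.
# The sample lines below are copied verbatim from the failure above.
journal='Dec 08 18:10:52 addons-249000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
Dec 08 18:10:52 addons-249000 systemd[1]: Failed to listen on CRI Docker Socket for the API.'

if printf '%s\n' "$journal" | grep -q 'already active, refusing'; then
  echo "socket-restart race detected"
  # Inside the VM, the usual remedy is to stop the service before
  # restarting its socket unit (hypothetical fix, not exercised here):
  #   sudo systemctl stop cri-docker.service
  #   sudo systemctl restart cri-docker.socket
fi
```

systemd enforces this deliberately: a socket unit cannot start listening while the service it activates already owns the connection, so restarting the socket alone races against a still-running service.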

                                                
                                    
TestMinikubeProfile (65.49s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-909000 --driver=hyperkit 
E1208 10:19:25.280689    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:35.521556    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:56.003447    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-909000 --driver=hyperkit : (36.552236651s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-911000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p second-911000 --driver=hyperkit : exit status 90 (15.962865799s)

-- stdout --
	* [second-911000] minikube v1.32.0 on Darwin 14.1.2
	  - MINIKUBE_LOCATION=17738
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node second-911000 in cluster second-911000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Fri 2023-12-08 18:20:07 UTC, ends at Fri 2023-12-08 18:20:13 UTC. --
	Dec 08 18:20:08 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 08 18:20:08 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 08 18:20:10 second-911000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 08 18:20:10 second-911000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 08 18:20:10 second-911000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 08 18:20:10 second-911000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 08 18:20:10 second-911000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 08 18:20:13 second-911000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 08 18:20:13 second-911000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 08 18:20:13 second-911000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 08 18:20:13 second-911000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 08 18:20:13 second-911000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-amd64 start -p second-911000 --driver=hyperkit ": exit status 90
panic.go:523: *** TestMinikubeProfile FAILED at 2023-12-08 10:20:13.669579 -0800 PST m=+635.388079384
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p second-911000 -n second-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p second-911000 -n second-911000: exit status 6 (142.523901ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 10:20:13.801220    3010 status.go:415] kubeconfig endpoint: extract IP: "second-911000" does not appear in /Users/jenkins/minikube-integration/17738-1113/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "second-911000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "second-911000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-911000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-911000: (5.31034651s)
panic.go:523: *** TestMinikubeProfile FAILED at 2023-12-08 10:20:19.122805 -0800 PST m=+640.841287980
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p first-909000 -n first-909000
helpers_test.go:244: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p first-909000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p first-909000 logs -n 25: (1.845535251s)
helpers_test.go:252: TestMinikubeProfile logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	| Command |                   Args                   |           Profile           |   User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	| delete  | -p functional-688000                     | functional-688000           | jenkins  | v1.32.0 | 08 Dec 23 10:15 PST | 08 Dec 23 10:15 PST |
	| start   | -p image-915000                          | image-915000                | jenkins  | v1.32.0 | 08 Dec 23 10:15 PST | 08 Dec 23 10:16 PST |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-915000                | jenkins  | v1.32.0 | 08 Dec 23 10:16 PST | 08 Dec 23 10:16 PST |
	|         | ./testdata/image-build/test-normal       |                             |          |         |                     |                     |
	|         | -p image-915000                          |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-915000                | jenkins  | v1.32.0 | 08 Dec 23 10:16 PST | 08 Dec 23 10:16 PST |
	|         | --build-opt=build-arg=ENV_A=test_env_str |                             |          |         |                     |                     |
	|         | --build-opt=no-cache                     |                             |          |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p       |                             |          |         |                     |                     |
	|         | image-915000                             |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-915000                | jenkins  | v1.32.0 | 08 Dec 23 10:16 PST | 08 Dec 23 10:16 PST |
	|         | ./testdata/image-build/test-normal       |                             |          |         |                     |                     |
	|         | --build-opt=no-cache -p                  |                             |          |         |                     |                     |
	|         | image-915000                             |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-915000                | jenkins  | v1.32.0 | 08 Dec 23 10:16 PST | 08 Dec 23 10:16 PST |
	|         | -f inner/Dockerfile                      |                             |          |         |                     |                     |
	|         | ./testdata/image-build/test-f            |                             |          |         |                     |                     |
	|         | -p image-915000                          |                             |          |         |                     |                     |
	| delete  | -p image-915000                          | image-915000                | jenkins  | v1.32.0 | 08 Dec 23 10:16 PST | 08 Dec 23 10:16 PST |
	| start   | -p ingress-addon-legacy-251000           | ingress-addon-legacy-251000 | jenkins  | v1.32.0 | 08 Dec 23 10:16 PST | 08 Dec 23 10:17 PST |
	|         | --kubernetes-version=v1.18.20            |                             |          |         |                     |                     |
	|         | --memory=4096 --wait=true                |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |          |         |                     |                     |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| addons  | ingress-addon-legacy-251000              | ingress-addon-legacy-251000 | jenkins  | v1.32.0 | 08 Dec 23 10:17 PST | 08 Dec 23 10:17 PST |
	|         | addons enable ingress                    |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |          |         |                     |                     |
	| addons  | ingress-addon-legacy-251000              | ingress-addon-legacy-251000 | jenkins  | v1.32.0 | 08 Dec 23 10:17 PST | 08 Dec 23 10:17 PST |
	|         | addons enable ingress-dns                |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |          |         |                     |                     |
	| ssh     | ingress-addon-legacy-251000              | ingress-addon-legacy-251000 | jenkins  | v1.32.0 | 08 Dec 23 10:18 PST | 08 Dec 23 10:18 PST |
	|         | ssh curl -s http://127.0.0.1/            |                             |          |         |                     |                     |
	|         | -H 'Host: nginx.example.com'             |                             |          |         |                     |                     |
	| ip      | ingress-addon-legacy-251000 ip           | ingress-addon-legacy-251000 | jenkins  | v1.32.0 | 08 Dec 23 10:18 PST | 08 Dec 23 10:18 PST |
	| addons  | ingress-addon-legacy-251000              | ingress-addon-legacy-251000 | jenkins  | v1.32.0 | 08 Dec 23 10:18 PST | 08 Dec 23 10:18 PST |
	|         | addons disable ingress-dns               |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                             |          |         |                     |                     |
	| addons  | ingress-addon-legacy-251000              | ingress-addon-legacy-251000 | jenkins  | v1.32.0 | 08 Dec 23 10:18 PST | 08 Dec 23 10:18 PST |
	|         | addons disable ingress                   |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                             |          |         |                     |                     |
	| delete  | -p ingress-addon-legacy-251000           | ingress-addon-legacy-251000 | jenkins  | v1.32.0 | 08 Dec 23 10:18 PST | 08 Dec 23 10:18 PST |
	| start   | -p json-output-102000                    | json-output-102000          | testUser | v1.32.0 | 08 Dec 23 10:18 PST | 08 Dec 23 10:19 PST |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	|         | --memory=2200 --wait=true                |                             |          |         |                     |                     |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| pause   | -p json-output-102000                    | json-output-102000          | testUser | v1.32.0 | 08 Dec 23 10:19 PST | 08 Dec 23 10:19 PST |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	| unpause | -p json-output-102000                    | json-output-102000          | testUser | v1.32.0 | 08 Dec 23 10:19 PST | 08 Dec 23 10:19 PST |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	| stop    | -p json-output-102000                    | json-output-102000          | testUser | v1.32.0 | 08 Dec 23 10:19 PST | 08 Dec 23 10:19 PST |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	| delete  | -p json-output-102000                    | json-output-102000          | jenkins  | v1.32.0 | 08 Dec 23 10:19 PST | 08 Dec 23 10:19 PST |
	| start   | -p json-output-error-894000              | json-output-error-894000    | jenkins  | v1.32.0 | 08 Dec 23 10:19 PST |                     |
	|         | --memory=2200 --output=json              |                             |          |         |                     |                     |
	|         | --wait=true --driver=fail                |                             |          |         |                     |                     |
	| delete  | -p json-output-error-894000              | json-output-error-894000    | jenkins  | v1.32.0 | 08 Dec 23 10:19 PST | 08 Dec 23 10:19 PST |
	| start   | -p first-909000                          | first-909000                | jenkins  | v1.32.0 | 08 Dec 23 10:19 PST | 08 Dec 23 10:19 PST |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| start   | -p second-911000                         | second-911000               | jenkins  | v1.32.0 | 08 Dec 23 10:19 PST |                     |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| delete  | -p second-911000                         | second-911000               | jenkins  | v1.32.0 | 08 Dec 23 10:20 PST | 08 Dec 23 10:20 PST |
	|---------|------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 10:19:57
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 10:19:57.759155    2995 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:19:57.759445    2995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:19:57.759448    2995 out.go:309] Setting ErrFile to fd 2...
	I1208 10:19:57.759451    2995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:19:57.759624    2995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	I1208 10:19:57.761159    2995 out.go:303] Setting JSON to false
	I1208 10:19:57.783426    2995 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1153,"bootTime":1702058444,"procs":424,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1208 10:19:57.783518    2995 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1208 10:19:57.807204    2995 out.go:177] * [second-911000] minikube v1.32.0 on Darwin 14.1.2
	I1208 10:19:57.873289    2995 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 10:19:57.849050    2995 notify.go:220] Checking for updates...
	I1208 10:19:57.913979    2995 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	I1208 10:19:57.935031    2995 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1208 10:19:57.971009    2995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 10:19:58.013021    2995 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	I1208 10:19:58.055071    2995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 10:19:58.079074    2995 config.go:182] Loaded profile config "first-909000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1208 10:19:58.079214    2995 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 10:19:58.107922    2995 out.go:177] * Using the hyperkit driver based on user configuration
	I1208 10:19:58.149940    2995 start.go:298] selected driver: hyperkit
	I1208 10:19:58.149952    2995 start.go:902] validating driver "hyperkit" against <nil>
	I1208 10:19:58.149966    2995 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 10:19:58.150129    2995 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:19:58.150269    2995 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17738-1113/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1208 10:19:58.158611    2995 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1208 10:19:58.162431    2995 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:19:58.162449    2995 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1208 10:19:58.162478    2995 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1208 10:19:58.165169    2995 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1208 10:19:58.165324    2995 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 10:19:58.165384    2995 cni.go:84] Creating CNI manager for ""
	I1208 10:19:58.165398    2995 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1208 10:19:58.165406    2995 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 10:19:58.165413    2995 start_flags.go:323] config:
	{Name:second-911000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:second-911000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 10:19:58.165555    2995 iso.go:125] acquiring lock: {Name:mk933f5286cca8a935e46c54218c5cced15285e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:19:58.208037    2995 out.go:177] * Starting control plane node second-911000 in cluster second-911000
	I1208 10:19:58.228934    2995 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1208 10:19:58.228980    2995 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1208 10:19:58.229002    2995 cache.go:56] Caching tarball of preloaded images
	I1208 10:19:58.229154    2995 preload.go:174] Found /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1208 10:19:58.229169    2995 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1208 10:19:58.229293    2995 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/second-911000/config.json ...
	I1208 10:19:58.229321    2995 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/second-911000/config.json: {Name:mkf4ec08fde3af8e77429f7ec61dd2d228bda3b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 10:19:58.229984    2995 start.go:365] acquiring machines lock for second-911000: {Name:mkf6539d901e554b062746e761b420c8557e3211 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1208 10:19:58.230059    2995 start.go:369] acquired machines lock for "second-911000" in 61.74µs
	I1208 10:19:58.230098    2995 start.go:93] Provisioning new machine with config: &{Name:second-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:second-911000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1208 10:19:58.230182    2995 start.go:125] createHost starting for "" (driver="hyperkit")
	I1208 10:19:58.273967    2995 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I1208 10:19:58.274233    2995 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:19:58.274273    2995 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:19:58.282310    2995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50612
	I1208 10:19:58.282651    2995 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:19:58.283055    2995 main.go:141] libmachine: Using API Version  1
	I1208 10:19:58.283062    2995 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:19:58.283282    2995 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:19:58.283374    2995 main.go:141] libmachine: (second-911000) Calling .GetMachineName
	I1208 10:19:58.283457    2995 main.go:141] libmachine: (second-911000) Calling .DriverName
	I1208 10:19:58.283542    2995 start.go:159] libmachine.API.Create for "second-911000" (driver="hyperkit")
	I1208 10:19:58.283565    2995 client.go:168] LocalClient.Create starting
	I1208 10:19:58.283598    2995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem
	I1208 10:19:58.283631    2995 main.go:141] libmachine: Decoding PEM data...
	I1208 10:19:58.283644    2995 main.go:141] libmachine: Parsing certificate...
	I1208 10:19:58.283698    2995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem
	I1208 10:19:58.283722    2995 main.go:141] libmachine: Decoding PEM data...
	I1208 10:19:58.283732    2995 main.go:141] libmachine: Parsing certificate...
	I1208 10:19:58.283744    2995 main.go:141] libmachine: Running pre-create checks...
	I1208 10:19:58.283748    2995 main.go:141] libmachine: (second-911000) Calling .PreCreateCheck
	I1208 10:19:58.283814    2995 main.go:141] libmachine: (second-911000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:19:58.283970    2995 main.go:141] libmachine: (second-911000) Calling .GetConfigRaw
	I1208 10:19:58.284376    2995 main.go:141] libmachine: Creating machine...
	I1208 10:19:58.284382    2995 main.go:141] libmachine: (second-911000) Calling .Create
	I1208 10:19:58.284443    2995 main.go:141] libmachine: (second-911000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:19:58.284598    2995 main.go:141] libmachine: (second-911000) DBG | I1208 10:19:58.284437    3003 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17738-1113/.minikube
	I1208 10:19:58.284643    2995 main.go:141] libmachine: (second-911000) Downloading /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17738-1113/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1208 10:19:58.453802    2995 main.go:141] libmachine: (second-911000) DBG | I1208 10:19:58.453731    3003 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/id_rsa...
	I1208 10:19:58.513687    2995 main.go:141] libmachine: (second-911000) DBG | I1208 10:19:58.513624    3003 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/second-911000.rawdisk...
	I1208 10:19:58.513702    2995 main.go:141] libmachine: (second-911000) DBG | Writing magic tar header
	I1208 10:19:58.513712    2995 main.go:141] libmachine: (second-911000) DBG | Writing SSH key tar header
	I1208 10:19:58.514387    2995 main.go:141] libmachine: (second-911000) DBG | I1208 10:19:58.514357    3003 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000 ...
	I1208 10:19:58.845122    2995 main.go:141] libmachine: (second-911000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:19:58.845149    2995 main.go:141] libmachine: (second-911000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/hyperkit.pid
	I1208 10:19:58.845169    2995 main.go:141] libmachine: (second-911000) DBG | Using UUID 64ebe99e-95f6-11ee-8514-f01898ef957c
	I1208 10:19:58.869556    2995 main.go:141] libmachine: (second-911000) DBG | Generated MAC da:9c:f9:88:b3:17
	I1208 10:19:58.869571    2995 main.go:141] libmachine: (second-911000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=second-911000
	I1208 10:19:58.869604    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"64ebe99e-95f6-11ee-8514-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011d1d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1208 10:19:58.869631    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"64ebe99e-95f6-11ee-8514-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011d1d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1208 10:19:58.869675    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "64ebe99e-95f6-11ee-8514-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/second-911000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/tty,log=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/bzimage,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=second-911000"}
	I1208 10:19:58.869704    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 64ebe99e-95f6-11ee-8514-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/second-911000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/tty,log=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/console-ring -f kexec,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/bzimage,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=second-911000"
	I1208 10:19:58.869715    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1208 10:19:58.872624    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 DEBUG: hyperkit: Pid is 3004
	I1208 10:19:58.873049    2995 main.go:141] libmachine: (second-911000) DBG | Attempt 0
	I1208 10:19:58.873066    2995 main.go:141] libmachine: (second-911000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:19:58.873129    2995 main.go:141] libmachine: (second-911000) DBG | hyperkit pid from json: 3004
	I1208 10:19:58.874013    2995 main.go:141] libmachine: (second-911000) DBG | Searching for da:9c:f9:88:b3:17 in /var/db/dhcpd_leases ...
	I1208 10:19:58.874061    2995 main.go:141] libmachine: (second-911000) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1208 10:19:58.874070    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:5a:d6:c2:1e:af:27 ID:1,5a:d6:c2:1e:af:27 Lease:0x6574afb3}
	I1208 10:19:58.874079    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:6:95:a1:20:d1:95 ID:1,6:95:a1:20:d1:95 Lease:0x6574af76}
	I1208 10:19:58.874084    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:ce:da:98:ef:83 ID:1,6e:ce:da:98:ef:83 Lease:0x6574af04}
	I1208 10:19:58.874089    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e6:66:8f:7e:be:1b ID:1,e6:66:8f:7e:be:1b Lease:0x65735d6e}
	I1208 10:19:58.874098    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:a7:30:b6:e9:1e ID:1,1a:a7:30:b6:e9:1e Lease:0x6574ade8}
	I1208 10:19:58.874106    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:54:d0:4d:4d:3b ID:1,de:54:d0:4d:4d:3b Lease:0x6574adbc}
	I1208 10:19:58.874111    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:19:58.874130    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:19:58.880186    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1208 10:19:58.890410    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1208 10:19:58.891187    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1208 10:19:58.891211    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1208 10:19:58.891223    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1208 10:19:58.891241    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1208 10:19:59.457004    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1208 10:19:59.457014    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1208 10:19:59.562003    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1208 10:19:59.562021    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1208 10:19:59.562029    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1208 10:19:59.562036    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1208 10:19:59.562918    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1208 10:19:59.562929    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:19:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1208 10:20:00.874687    2995 main.go:141] libmachine: (second-911000) DBG | Attempt 1
	I1208 10:20:00.874697    2995 main.go:141] libmachine: (second-911000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:20:00.874806    2995 main.go:141] libmachine: (second-911000) DBG | hyperkit pid from json: 3004
	I1208 10:20:00.875591    2995 main.go:141] libmachine: (second-911000) DBG | Searching for da:9c:f9:88:b3:17 in /var/db/dhcpd_leases ...
	I1208 10:20:00.875641    2995 main.go:141] libmachine: (second-911000) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1208 10:20:00.875649    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:5a:d6:c2:1e:af:27 ID:1,5a:d6:c2:1e:af:27 Lease:0x6574afb3}
	I1208 10:20:00.875657    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:6:95:a1:20:d1:95 ID:1,6:95:a1:20:d1:95 Lease:0x6574af76}
	I1208 10:20:00.875662    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:ce:da:98:ef:83 ID:1,6e:ce:da:98:ef:83 Lease:0x6574af04}
	I1208 10:20:00.875668    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e6:66:8f:7e:be:1b ID:1,e6:66:8f:7e:be:1b Lease:0x65735d6e}
	I1208 10:20:00.875678    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:a7:30:b6:e9:1e ID:1,1a:a7:30:b6:e9:1e Lease:0x6574ade8}
	I1208 10:20:00.875683    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:54:d0:4d:4d:3b ID:1,de:54:d0:4d:4d:3b Lease:0x6574adbc}
	I1208 10:20:00.875695    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:20:00.875700    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:20:02.876666    2995 main.go:141] libmachine: (second-911000) DBG | Attempt 2
	I1208 10:20:02.876677    2995 main.go:141] libmachine: (second-911000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:20:02.876760    2995 main.go:141] libmachine: (second-911000) DBG | hyperkit pid from json: 3004
	I1208 10:20:02.877554    2995 main.go:141] libmachine: (second-911000) DBG | Searching for da:9c:f9:88:b3:17 in /var/db/dhcpd_leases ...
	I1208 10:20:02.877599    2995 main.go:141] libmachine: (second-911000) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1208 10:20:02.877609    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:5a:d6:c2:1e:af:27 ID:1,5a:d6:c2:1e:af:27 Lease:0x6574afb3}
	I1208 10:20:02.877621    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:6:95:a1:20:d1:95 ID:1,6:95:a1:20:d1:95 Lease:0x6574af76}
	I1208 10:20:02.877626    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:ce:da:98:ef:83 ID:1,6e:ce:da:98:ef:83 Lease:0x6574af04}
	I1208 10:20:02.877632    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e6:66:8f:7e:be:1b ID:1,e6:66:8f:7e:be:1b Lease:0x65735d6e}
	I1208 10:20:02.877637    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:a7:30:b6:e9:1e ID:1,1a:a7:30:b6:e9:1e Lease:0x6574ade8}
	I1208 10:20:02.877653    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:54:d0:4d:4d:3b ID:1,de:54:d0:4d:4d:3b Lease:0x6574adbc}
	I1208 10:20:02.877667    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:20:02.877677    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:20:04.562581    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:20:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1208 10:20:04.562593    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:20:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1208 10:20:04.562604    2995 main.go:141] libmachine: (second-911000) DBG | 2023/12/08 10:20:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1208 10:20:04.878414    2995 main.go:141] libmachine: (second-911000) DBG | Attempt 3
	I1208 10:20:04.878429    2995 main.go:141] libmachine: (second-911000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:20:04.878530    2995 main.go:141] libmachine: (second-911000) DBG | hyperkit pid from json: 3004
	I1208 10:20:04.879907    2995 main.go:141] libmachine: (second-911000) DBG | Searching for da:9c:f9:88:b3:17 in /var/db/dhcpd_leases ...
	I1208 10:20:04.879952    2995 main.go:141] libmachine: (second-911000) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1208 10:20:04.879978    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:5a:d6:c2:1e:af:27 ID:1,5a:d6:c2:1e:af:27 Lease:0x6574afb3}
	I1208 10:20:04.880005    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:6:95:a1:20:d1:95 ID:1,6:95:a1:20:d1:95 Lease:0x6574af76}
	I1208 10:20:04.880030    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:ce:da:98:ef:83 ID:1,6e:ce:da:98:ef:83 Lease:0x6574af04}
	I1208 10:20:04.880043    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e6:66:8f:7e:be:1b ID:1,e6:66:8f:7e:be:1b Lease:0x65735d6e}
	I1208 10:20:04.880056    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:a7:30:b6:e9:1e ID:1,1a:a7:30:b6:e9:1e Lease:0x6574ade8}
	I1208 10:20:04.880067    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:54:d0:4d:4d:3b ID:1,de:54:d0:4d:4d:3b Lease:0x6574adbc}
	I1208 10:20:04.880080    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:20:04.880094    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:20:06.880114    2995 main.go:141] libmachine: (second-911000) DBG | Attempt 4
	I1208 10:20:06.880129    2995 main.go:141] libmachine: (second-911000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:20:06.880202    2995 main.go:141] libmachine: (second-911000) DBG | hyperkit pid from json: 3004
	I1208 10:20:06.880981    2995 main.go:141] libmachine: (second-911000) DBG | Searching for da:9c:f9:88:b3:17 in /var/db/dhcpd_leases ...
	I1208 10:20:06.881028    2995 main.go:141] libmachine: (second-911000) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I1208 10:20:06.881035    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:5a:d6:c2:1e:af:27 ID:1,5a:d6:c2:1e:af:27 Lease:0x6574afb3}
	I1208 10:20:06.881050    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:6:95:a1:20:d1:95 ID:1,6:95:a1:20:d1:95 Lease:0x6574af76}
	I1208 10:20:06.881056    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:ce:da:98:ef:83 ID:1,6e:ce:da:98:ef:83 Lease:0x6574af04}
	I1208 10:20:06.881065    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e6:66:8f:7e:be:1b ID:1,e6:66:8f:7e:be:1b Lease:0x65735d6e}
	I1208 10:20:06.881076    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:a7:30:b6:e9:1e ID:1,1a:a7:30:b6:e9:1e Lease:0x6574ade8}
	I1208 10:20:06.881082    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:54:d0:4d:4d:3b ID:1,de:54:d0:4d:4d:3b Lease:0x6574adbc}
	I1208 10:20:06.881087    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:20:06.881093    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:20:08.882331    2995 main.go:141] libmachine: (second-911000) DBG | Attempt 5
	I1208 10:20:08.882348    2995 main.go:141] libmachine: (second-911000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:20:08.882400    2995 main.go:141] libmachine: (second-911000) DBG | hyperkit pid from json: 3004
	I1208 10:20:08.883239    2995 main.go:141] libmachine: (second-911000) DBG | Searching for da:9c:f9:88:b3:17 in /var/db/dhcpd_leases ...
	I1208 10:20:08.883361    2995 main.go:141] libmachine: (second-911000) DBG | Found 9 entries in /var/db/dhcpd_leases!
	I1208 10:20:08.883374    2995 main.go:141] libmachine: (second-911000) Calling .GetConfigRaw
	I1208 10:20:08.883377    2995 main.go:141] libmachine: (second-911000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:da:9c:f9:88:b3:17 ID:1,da:9c:f9:88:b3:17 Lease:0x6574afd8}
	I1208 10:20:08.883401    2995 main.go:141] libmachine: (second-911000) DBG | Found match: da:9c:f9:88:b3:17
	I1208 10:20:08.883413    2995 main.go:141] libmachine: (second-911000) DBG | IP: 192.169.0.10
	I1208 10:20:08.883979    2995 main.go:141] libmachine: (second-911000) Calling .DriverName
	I1208 10:20:08.884084    2995 main.go:141] libmachine: (second-911000) Calling .DriverName
	I1208 10:20:08.884261    2995 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1208 10:20:08.884284    2995 main.go:141] libmachine: (second-911000) Calling .GetState
	I1208 10:20:08.884374    2995 main.go:141] libmachine: (second-911000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:20:08.884427    2995 main.go:141] libmachine: (second-911000) DBG | hyperkit pid from json: 3004
	I1208 10:20:08.885266    2995 main.go:141] libmachine: Detecting operating system of created instance...
	I1208 10:20:08.885274    2995 main.go:141] libmachine: Waiting for SSH to be available...
	I1208 10:20:08.885278    2995 main.go:141] libmachine: Getting to WaitForSSH function...
	I1208 10:20:08.885282    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:08.885377    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:08.885477    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:08.885559    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:08.885649    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:08.885802    2995 main.go:141] libmachine: Using SSH client type: native
	I1208 10:20:08.886166    2995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.10 22 <nil> <nil>}
	I1208 10:20:08.886170    2995 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1208 10:20:09.942761    2995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 10:20:09.942768    2995 main.go:141] libmachine: Detecting the provisioner...
	I1208 10:20:09.942773    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:09.942898    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:09.942988    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:09.943071    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:09.943140    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:09.943262    2995 main.go:141] libmachine: Using SSH client type: native
	I1208 10:20:09.943517    2995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.10 22 <nil> <nil>}
	I1208 10:20:09.943522    2995 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1208 10:20:09.998664    2995 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1208 10:20:09.998727    2995 main.go:141] libmachine: found compatible host: buildroot
	I1208 10:20:09.998731    2995 main.go:141] libmachine: Provisioning with buildroot...
	I1208 10:20:09.998735    2995 main.go:141] libmachine: (second-911000) Calling .GetMachineName
	I1208 10:20:09.998866    2995 buildroot.go:166] provisioning hostname "second-911000"
	I1208 10:20:09.998878    2995 main.go:141] libmachine: (second-911000) Calling .GetMachineName
	I1208 10:20:09.998981    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:09.999058    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:09.999154    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:09.999247    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:09.999326    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:09.999451    2995 main.go:141] libmachine: Using SSH client type: native
	I1208 10:20:09.999694    2995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.10 22 <nil> <nil>}
	I1208 10:20:09.999703    2995 main.go:141] libmachine: About to run SSH command:
	sudo hostname second-911000 && echo "second-911000" | sudo tee /etc/hostname
	I1208 10:20:10.064098    2995 main.go:141] libmachine: SSH cmd err, output: <nil>: second-911000
	
	I1208 10:20:10.064111    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:10.064236    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:10.064325    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:10.064396    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:10.064471    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:10.064605    2995 main.go:141] libmachine: Using SSH client type: native
	I1208 10:20:10.064859    2995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.10 22 <nil> <nil>}
	I1208 10:20:10.064867    2995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\ssecond-911000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 second-911000/g' /etc/hosts;
				else 
					echo '127.0.1.1 second-911000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 10:20:10.125470    2995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 10:20:10.125483    2995 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17738-1113/.minikube CaCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17738-1113/.minikube}
	I1208 10:20:10.125491    2995 buildroot.go:174] setting up certificates
	I1208 10:20:10.125503    2995 provision.go:83] configureAuth start
	I1208 10:20:10.125508    2995 main.go:141] libmachine: (second-911000) Calling .GetMachineName
	I1208 10:20:10.125637    2995 main.go:141] libmachine: (second-911000) Calling .GetIP
	I1208 10:20:10.125713    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:10.125818    2995 provision.go:138] copyHostCerts
	I1208 10:20:10.125887    2995 exec_runner.go:144] found /Users/jenkins/minikube-integration/17738-1113/.minikube/ca.pem, removing ...
	I1208 10:20:10.125894    2995 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17738-1113/.minikube/ca.pem
	I1208 10:20:10.126015    2995 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/ca.pem (1078 bytes)
	I1208 10:20:10.126234    2995 exec_runner.go:144] found /Users/jenkins/minikube-integration/17738-1113/.minikube/cert.pem, removing ...
	I1208 10:20:10.126237    2995 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17738-1113/.minikube/cert.pem
	I1208 10:20:10.126307    2995 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/cert.pem (1123 bytes)
	I1208 10:20:10.126470    2995 exec_runner.go:144] found /Users/jenkins/minikube-integration/17738-1113/.minikube/key.pem, removing ...
	I1208 10:20:10.126473    2995 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17738-1113/.minikube/key.pem
	I1208 10:20:10.126533    2995 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/key.pem (1679 bytes)
	I1208 10:20:10.126670    2995 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca-key.pem org=jenkins.second-911000 san=[192.169.0.10 192.169.0.10 localhost 127.0.0.1 minikube second-911000]
	I1208 10:20:10.238716    2995 provision.go:172] copyRemoteCerts
	I1208 10:20:10.238779    2995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 10:20:10.238797    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:10.238962    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:10.239070    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:10.239157    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:10.239244    2995 sshutil.go:53] new ssh client: &{IP:192.169.0.10 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/id_rsa Username:docker}
	I1208 10:20:10.272777    2995 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 10:20:10.289602    2995 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1208 10:20:10.306934    2995 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 10:20:10.325488    2995 provision.go:86] duration metric: configureAuth took 199.973075ms
	I1208 10:20:10.325499    2995 buildroot.go:189] setting minikube options for container-runtime
	I1208 10:20:10.325637    2995 config.go:182] Loaded profile config "second-911000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1208 10:20:10.325648    2995 main.go:141] libmachine: (second-911000) Calling .DriverName
	I1208 10:20:10.325806    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:10.325907    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:10.325989    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:10.326070    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:10.326157    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:10.326283    2995 main.go:141] libmachine: Using SSH client type: native
	I1208 10:20:10.326548    2995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.10 22 <nil> <nil>}
	I1208 10:20:10.326557    2995 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1208 10:20:10.384712    2995 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1208 10:20:10.384719    2995 buildroot.go:70] root file system type: tmpfs
	I1208 10:20:10.384801    2995 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1208 10:20:10.384818    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:10.384955    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:10.385048    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:10.385140    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:10.385247    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:10.385384    2995 main.go:141] libmachine: Using SSH client type: native
	I1208 10:20:10.385663    2995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.10 22 <nil> <nil>}
	I1208 10:20:10.385713    2995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1208 10:20:10.451566    2995 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1208 10:20:10.451587    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:10.451738    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:10.451830    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:10.451905    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:10.451996    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:10.452105    2995 main.go:141] libmachine: Using SSH client type: native
	I1208 10:20:10.452351    2995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.10 22 <nil> <nil>}
	I1208 10:20:10.452360    2995 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1208 10:20:10.958495    2995 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1208 10:20:10.958506    2995 main.go:141] libmachine: Checking connection to Docker...
	I1208 10:20:10.958510    2995 main.go:141] libmachine: (second-911000) Calling .GetURL
	I1208 10:20:10.958641    2995 main.go:141] libmachine: Docker is up and running!
	I1208 10:20:10.958645    2995 main.go:141] libmachine: Reticulating splines...
	I1208 10:20:10.958649    2995 client.go:171] LocalClient.Create took 12.675038619s
	I1208 10:20:10.958658    2995 start.go:167] duration metric: libmachine.API.Create for "second-911000" took 12.67507493s
	I1208 10:20:10.958666    2995 start.go:300] post-start starting for "second-911000" (driver="hyperkit")
	I1208 10:20:10.958674    2995 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 10:20:10.958681    2995 main.go:141] libmachine: (second-911000) Calling .DriverName
	I1208 10:20:10.958807    2995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 10:20:10.958815    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:10.958897    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:10.958979    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:10.959074    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:10.959153    2995 sshutil.go:53] new ssh client: &{IP:192.169.0.10 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/id_rsa Username:docker}
	I1208 10:20:10.993096    2995 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 10:20:10.995728    2995 info.go:137] Remote host: Buildroot 2021.02.12
	I1208 10:20:10.995738    2995 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17738-1113/.minikube/addons for local assets ...
	I1208 10:20:10.995816    2995 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17738-1113/.minikube/files for local assets ...
	I1208 10:20:10.995942    2995 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17738-1113/.minikube/files/etc/ssl/certs/15852.pem -> 15852.pem in /etc/ssl/certs
	I1208 10:20:10.996110    2995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 10:20:11.001800    2995 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/files/etc/ssl/certs/15852.pem --> /etc/ssl/certs/15852.pem (1708 bytes)
	I1208 10:20:11.017896    2995 start.go:303] post-start completed in 59.223477ms
	I1208 10:20:11.017916    2995 main.go:141] libmachine: (second-911000) Calling .GetConfigRaw
	I1208 10:20:11.018483    2995 main.go:141] libmachine: (second-911000) Calling .GetIP
	I1208 10:20:11.018622    2995 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/second-911000/config.json ...
	I1208 10:20:11.018933    2995 start.go:128] duration metric: createHost completed in 12.788697799s
	I1208 10:20:11.018945    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:11.019029    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:11.019110    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:11.019187    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:11.019256    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:11.019356    2995 main.go:141] libmachine: Using SSH client type: native
	I1208 10:20:11.019589    2995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.10 22 <nil> <nil>}
	I1208 10:20:11.019593    2995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1208 10:20:11.074397    2995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702059611.134043453
	
	I1208 10:20:11.074403    2995 fix.go:206] guest clock: 1702059611.134043453
	I1208 10:20:11.074407    2995 fix.go:219] Guest: 2023-12-08 10:20:11.134043453 -0800 PST Remote: 2023-12-08 10:20:11.018939 -0800 PST m=+13.303889119 (delta=115.104453ms)
	I1208 10:20:11.074426    2995 fix.go:190] guest clock delta is within tolerance: 115.104453ms
	I1208 10:20:11.074434    2995 start.go:83] releasing machines lock for "second-911000", held for 12.844323038s
	I1208 10:20:11.074450    2995 main.go:141] libmachine: (second-911000) Calling .DriverName
	I1208 10:20:11.074583    2995 main.go:141] libmachine: (second-911000) Calling .GetIP
	I1208 10:20:11.074679    2995 main.go:141] libmachine: (second-911000) Calling .DriverName
	I1208 10:20:11.074968    2995 main.go:141] libmachine: (second-911000) Calling .DriverName
	I1208 10:20:11.075068    2995 main.go:141] libmachine: (second-911000) Calling .DriverName
	I1208 10:20:11.075148    2995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 10:20:11.075174    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:11.075203    2995 ssh_runner.go:195] Run: cat /version.json
	I1208 10:20:11.075210    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHHostname
	I1208 10:20:11.075270    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:11.075291    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHPort
	I1208 10:20:11.075385    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:11.075396    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHKeyPath
	I1208 10:20:11.075463    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:11.075473    2995 main.go:141] libmachine: (second-911000) Calling .GetSSHUsername
	I1208 10:20:11.075540    2995 sshutil.go:53] new ssh client: &{IP:192.169.0.10 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/id_rsa Username:docker}
	I1208 10:20:11.075553    2995 sshutil.go:53] new ssh client: &{IP:192.169.0.10 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/second-911000/id_rsa Username:docker}
	I1208 10:20:11.156827    2995 ssh_runner.go:195] Run: systemctl --version
	I1208 10:20:11.160482    2995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 10:20:11.163950    2995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 10:20:11.163994    2995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 10:20:11.174205    2995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1208 10:20:11.174216    2995 start.go:475] detecting cgroup driver to use...
	I1208 10:20:11.174312    2995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 10:20:11.186255    2995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1208 10:20:11.192855    2995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 10:20:11.199356    2995 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 10:20:11.199396    2995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 10:20:11.205859    2995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 10:20:11.212453    2995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 10:20:11.219079    2995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 10:20:11.226733    2995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 10:20:11.233504    2995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 10:20:11.239935    2995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 10:20:11.245672    2995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 10:20:11.251522    2995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:20:11.336156    2995 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 10:20:11.348829    2995 start.go:475] detecting cgroup driver to use...
	I1208 10:20:11.348896    2995 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1208 10:20:11.362081    2995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 10:20:11.378303    2995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 10:20:11.391474    2995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 10:20:11.400502    2995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 10:20:11.410486    2995 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 10:20:11.437022    2995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 10:20:11.445771    2995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 10:20:11.457644    2995 ssh_runner.go:195] Run: which cri-dockerd
	I1208 10:20:11.459968    2995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1208 10:20:11.465575    2995 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1208 10:20:11.476874    2995 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1208 10:20:11.562436    2995 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1208 10:20:11.657573    2995 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1208 10:20:11.657645    2995 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1208 10:20:11.668814    2995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:20:11.752427    2995 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1208 10:20:13.114183    2995 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.361737681s)
	I1208 10:20:13.114236    2995 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1208 10:20:13.194536    2995 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1208 10:20:13.290743    2995 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1208 10:20:13.393779    2995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:20:13.490376    2995 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1208 10:20:13.501586    2995 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1208 10:20:13.538353    2995 out.go:177] 
	W1208 10:20:13.559521    2995 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Fri 2023-12-08 18:20:07 UTC, ends at Fri 2023-12-08 18:20:13 UTC. --
	Dec 08 18:20:08 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 08 18:20:08 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 08 18:20:10 second-911000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 08 18:20:10 second-911000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 08 18:20:10 second-911000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 08 18:20:10 second-911000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 08 18:20:10 second-911000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 08 18:20:13 second-911000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 08 18:20:13 second-911000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 08 18:20:13 second-911000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 08 18:20:13 second-911000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 08 18:20:13 second-911000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1208 10:20:13.559543    2995 out.go:239] * 
	W1208 10:20:13.560195    2995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 10:20:13.624480    2995 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-12-08 18:19:30 UTC, ends at Fri 2023-12-08 18:20:19 UTC. --
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.112670467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.234274334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.234646635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.236886202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.236912066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:09 first-909000 cri-dockerd[1083]: time="2023-12-08T18:20:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0706947a0ebbf1c043fb64c4e68d00eec43eca95c120453a217a85e42b8c75a5/resolv.conf as [nameserver 192.169.0.1]"
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.370567874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.372221629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.372238944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.372246618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:09 first-909000 cri-dockerd[1083]: time="2023-12-08T18:20:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/25cec61b98795b899865a21af64aa5cbb51949de975f1c876dbfd7df9d361860/resolv.conf as [nameserver 192.169.0.1]"
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.511903430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.512041252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.512065080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 08 18:20:09 first-909000 dockerd[1199]: time="2023-12-08T18:20:09.512150204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:10 first-909000 dockerd[1199]: time="2023-12-08T18:20:10.286642185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 08 18:20:10 first-909000 dockerd[1199]: time="2023-12-08T18:20:10.286720617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:10 first-909000 dockerd[1199]: time="2023-12-08T18:20:10.286755503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 08 18:20:10 first-909000 dockerd[1199]: time="2023-12-08T18:20:10.286767211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:10 first-909000 cri-dockerd[1083]: time="2023-12-08T18:20:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/329eda477f933f342cf80199d5086f5f77f546ea0426ed63864673892d1e6b4c/resolv.conf as [nameserver 192.169.0.1]"
	Dec 08 18:20:10 first-909000 dockerd[1199]: time="2023-12-08T18:20:10.668656969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 08 18:20:10 first-909000 dockerd[1199]: time="2023-12-08T18:20:10.668739779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:10 first-909000 dockerd[1199]: time="2023-12-08T18:20:10.668760010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 08 18:20:10 first-909000 dockerd[1199]: time="2023-12-08T18:20:10.668771100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 08 18:20:17 first-909000 cri-dockerd[1083]: time="2023-12-08T18:20:17Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	26d2fb793599c       ead0a4a53df89       9 seconds ago       Running             coredns                   0                   329eda477f933       coredns-5dd5756b68-rkf9r
	92c3a71ac1234       6e38f40d628db       10 seconds ago      Running             storage-provisioner       0                   25cec61b98795       storage-provisioner
	8e0e43a466cbf       83f6cc407eed8       10 seconds ago      Running             kube-proxy                0                   0706947a0ebbf       kube-proxy-2btm5
	7393b826ba29d       e3db313c6dbc0       29 seconds ago      Running             kube-scheduler            0                   f7836c45b7ab3       kube-scheduler-first-909000
	cca7952c59495       d058aa5ab969c       30 seconds ago      Running             kube-controller-manager   0                   dbe9480eb79c7       kube-controller-manager-first-909000
	a488c849d269a       7fe0e6f37db33       30 seconds ago      Running             kube-apiserver            0                   2b00beb6cafa9       kube-apiserver-first-909000
	f30516db8763a       73deb9a3f7025       30 seconds ago      Running             etcd                      0                   6b60afa2f1d34       etcd-first-909000
	
	* 
	* ==> coredns [26d2fb793599] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38952 - 14497 "HINFO IN 7122734190421361033.7814638073247765412. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01250149s
	
	* 
	* ==> describe nodes <==
	* Name:               first-909000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=first-909000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b
	                    minikube.k8s.io/name=first-909000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_08T10_19_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Dec 2023 18:19:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  first-909000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Dec 2023 18:20:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Dec 2023 18:20:17 +0000   Fri, 08 Dec 2023 18:19:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Dec 2023 18:20:17 +0000   Fri, 08 Dec 2023 18:19:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Dec 2023 18:20:17 +0000   Fri, 08 Dec 2023 18:19:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Dec 2023 18:20:17 +0000   Fri, 08 Dec 2023 18:19:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    first-909000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925796Ki
	  pods:               110
	System Info:
	  Machine ID:                 c07e5c4d06644067bb7e5b6c6454998b
	  System UUID:                4f1211ee-0000-0000-9ec3-f01898ef957c
	  Boot ID:                    bbb6d1d9-e1c8-4b1a-a2d0-38a131cde6bf
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-rkf9r                100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     11s
	  kube-system                 etcd-first-909000                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         25s
	  kube-system                 kube-apiserver-first-909000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-first-909000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-2btm5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kube-scheduler-first-909000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10s   kube-proxy       
	  Normal  Starting                 25s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25s   kubelet          Node first-909000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s   kubelet          Node first-909000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s   kubelet          Node first-909000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22s   kubelet          Node first-909000 status is now: NodeReady
	  Normal  RegisteredNode           12s   node-controller  Node first-909000 event: Registered Node first-909000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.008731] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.085572] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.040182] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.907186] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +3.061548] systemd-fstab-generator[548]: Ignoring "noauto" for root device
	[  +0.099754] systemd-fstab-generator[567]: Ignoring "noauto" for root device
	[  +0.693534] systemd-fstab-generator[747]: Ignoring "noauto" for root device
	[  +0.217444] systemd-fstab-generator[784]: Ignoring "noauto" for root device
	[  +0.085949] systemd-fstab-generator[796]: Ignoring "noauto" for root device
	[  +0.102251] systemd-fstab-generator[809]: Ignoring "noauto" for root device
	[  +1.239272] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.164442] systemd-fstab-generator[973]: Ignoring "noauto" for root device
	[  +0.102545] systemd-fstab-generator[1009]: Ignoring "noauto" for root device
	[  +0.085439] systemd-fstab-generator[1020]: Ignoring "noauto" for root device
	[  +0.087601] systemd-fstab-generator[1031]: Ignoring "noauto" for root device
	[  +0.111242] systemd-fstab-generator[1052]: Ignoring "noauto" for root device
	[  +5.502608] systemd-fstab-generator[1183]: Ignoring "noauto" for root device
	[  +1.480434] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.025664] systemd-fstab-generator[1561]: Ignoring "noauto" for root device
	[  +6.730822] systemd-fstab-generator[2457]: Ignoring "noauto" for root device
	[Dec 8 18:20] kauditd_printk_skb: 39 callbacks suppressed
	
	* 
	* ==> etcd [f30516db8763] <==
	* {"level":"info","ts":"2023-12-08T18:19:50.131173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a18840637a90865f switched to configuration voters=(11639624032941278815)"}
	{"level":"info","ts":"2023-12-08T18:19:50.131273Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"349a45023fb6d7e","local-member-id":"a18840637a90865f","added-peer-id":"a18840637a90865f","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2023-12-08T18:19:50.131853Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-08T18:19:50.131937Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.9:2380"}
	{"level":"info","ts":"2023-12-08T18:19:50.131944Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.9:2380"}
	{"level":"info","ts":"2023-12-08T18:19:50.132512Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a18840637a90865f","initial-advertise-peer-urls":["https://192.169.0.9:2380"],"listen-peer-urls":["https://192.169.0.9:2380"],"advertise-client-urls":["https://192.169.0.9:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.9:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-08T18:19:50.132532Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-08T18:19:50.819688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a18840637a90865f is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-08T18:19:50.819863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a18840637a90865f became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-08T18:19:50.819977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a18840637a90865f received MsgPreVoteResp from a18840637a90865f at term 1"}
	{"level":"info","ts":"2023-12-08T18:19:50.820029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a18840637a90865f became candidate at term 2"}
	{"level":"info","ts":"2023-12-08T18:19:50.820136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a18840637a90865f received MsgVoteResp from a18840637a90865f at term 2"}
	{"level":"info","ts":"2023-12-08T18:19:50.82019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a18840637a90865f became leader at term 2"}
	{"level":"info","ts":"2023-12-08T18:19:50.820295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a18840637a90865f elected leader a18840637a90865f at term 2"}
	{"level":"info","ts":"2023-12-08T18:19:50.8273Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a18840637a90865f","local-member-attributes":"{Name:first-909000 ClientURLs:[https://192.169.0.9:2379]}","request-path":"/0/members/a18840637a90865f/attributes","cluster-id":"349a45023fb6d7e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-08T18:19:50.827387Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-08T18:19:50.828049Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-08T18:19:50.828315Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-08T18:19:50.828669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.9:2379"}
	{"level":"info","ts":"2023-12-08T18:19:50.828734Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-08T18:19:50.842568Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"349a45023fb6d7e","local-member-id":"a18840637a90865f","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-08T18:19:50.849313Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-08T18:19:50.862439Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-08T18:19:50.850107Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-08T18:19:50.888085Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  18:20:20 up 0 min,  0 users,  load average: 0.33, 0.10, 0.04
	Linux first-909000 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a488c849d269] <==
	* I1208 18:19:52.140535       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1208 18:19:52.140643       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1208 18:19:52.143280       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1208 18:19:52.143378       1 aggregator.go:166] initial CRD sync complete...
	I1208 18:19:52.143456       1 autoregister_controller.go:141] Starting autoregister controller
	I1208 18:19:52.143538       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 18:19:52.143628       1 cache.go:39] Caches are synced for autoregister controller
	I1208 18:19:52.147558       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1208 18:19:52.170998       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1208 18:19:52.374180       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 18:19:53.055123       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1208 18:19:53.059125       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1208 18:19:53.059134       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 18:19:53.358574       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 18:19:53.396403       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 18:19:53.489418       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1208 18:19:53.494271       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.169.0.9]
	I1208 18:19:53.494967       1 controller.go:624] quota admission added evaluator for: endpoints
	I1208 18:19:53.497793       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 18:19:54.133657       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1208 18:19:55.042583       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1208 18:19:55.048878       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1208 18:19:55.058337       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1208 18:20:08.833128       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1208 18:20:08.932841       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [cca7952c5949] <==
	* I1208 18:20:08.130792       1 shared_informer.go:318] Caches are synced for taint
	I1208 18:20:08.130883       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1208 18:20:08.131004       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="first-909000"
	I1208 18:20:08.131115       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1208 18:20:08.131299       1 event.go:307] "Event occurred" object="first-909000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node first-909000 event: Registered Node first-909000 in Controller"
	I1208 18:20:08.131450       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1208 18:20:08.131487       1 taint_manager.go:210] "Sending events to api server"
	I1208 18:20:08.132939       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1208 18:20:08.140877       1 shared_informer.go:318] Caches are synced for resource quota
	I1208 18:20:08.183589       1 shared_informer.go:318] Caches are synced for disruption
	I1208 18:20:08.218511       1 shared_informer.go:318] Caches are synced for resource quota
	I1208 18:20:08.232091       1 shared_informer.go:318] Caches are synced for deployment
	I1208 18:20:08.568492       1 shared_informer.go:318] Caches are synced for garbage collector
	I1208 18:20:08.633232       1 shared_informer.go:318] Caches are synced for garbage collector
	I1208 18:20:08.633294       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1208 18:20:08.838830       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2btm5"
	I1208 18:20:08.935344       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 1"
	I1208 18:20:09.037222       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rkf9r"
	I1208 18:20:09.043816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.939849ms"
	I1208 18:20:09.060812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.822392ms"
	I1208 18:20:09.061026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.465µs"
	I1208 18:20:09.101022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.263µs"
	I1208 18:20:11.618688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.772µs"
	I1208 18:20:11.635522       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.208062ms"
	I1208 18:20:11.636189       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.593µs"
	
	* 
	* ==> kube-proxy [8e0e43a466cb] <==
	* I1208 18:20:09.503511       1 server_others.go:69] "Using iptables proxy"
	I1208 18:20:09.517120       1 node.go:141] Successfully retrieved node IP: 192.169.0.9
	I1208 18:20:09.559871       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1208 18:20:09.563431       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1208 18:20:09.573281       1 server_others.go:152] "Using iptables Proxier"
	I1208 18:20:09.573328       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1208 18:20:09.573433       1 server.go:846] "Version info" version="v1.28.4"
	I1208 18:20:09.573460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 18:20:09.574273       1 config.go:188] "Starting service config controller"
	I1208 18:20:09.574344       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1208 18:20:09.574359       1 config.go:97] "Starting endpoint slice config controller"
	I1208 18:20:09.574362       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1208 18:20:09.576609       1 config.go:315] "Starting node config controller"
	I1208 18:20:09.576660       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1208 18:20:09.675426       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1208 18:20:09.675611       1 shared_informer.go:318] Caches are synced for service config
	I1208 18:20:09.676932       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7393b826ba29] <==
	* W1208 18:19:52.114803       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1208 18:19:52.114845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1208 18:19:52.114964       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1208 18:19:52.114997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1208 18:19:52.115164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1208 18:19:52.115236       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1208 18:19:52.115247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1208 18:19:52.115253       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1208 18:19:52.115392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1208 18:19:52.115483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1208 18:19:52.115542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1208 18:19:52.115649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1208 18:19:52.115799       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1208 18:19:52.115827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1208 18:19:52.933243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1208 18:19:52.933261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1208 18:19:53.012848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1208 18:19:53.012887       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1208 18:19:53.139960       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1208 18:19:53.139997       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1208 18:19:53.142362       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1208 18:19:53.142446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1208 18:19:53.249095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1208 18:19:53.249193       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1208 18:19:55.899209       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-12-08 18:19:30 UTC, ends at Fri 2023-12-08 18:20:21 UTC. --
	Dec 08 18:19:56 first-909000 kubelet[2470]: I1208 18:19:56.317501    2470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-first-909000" podStartSLOduration=2.317429818 podCreationTimestamp="2023-12-08 18:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-08 18:19:56.311489856 +0000 UTC m=+1.293477393" watchObservedRunningTime="2023-12-08 18:19:56.317429818 +0000 UTC m=+1.299417354"
	Dec 08 18:19:58 first-909000 kubelet[2470]: I1208 18:19:58.309405    2470 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 08 18:20:08 first-909000 kubelet[2470]: I1208 18:20:08.142018    2470 topology_manager.go:215] "Topology Admit Handler" podUID="4295afa1-cc2b-4ebb-8530-badd5b0b04ba" podNamespace="kube-system" podName="storage-provisioner"
	Dec 08 18:20:08 first-909000 kubelet[2470]: I1208 18:20:08.158006    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdz68\" (UniqueName: \"kubernetes.io/projected/4295afa1-cc2b-4ebb-8530-badd5b0b04ba-kube-api-access-tdz68\") pod \"storage-provisioner\" (UID: \"4295afa1-cc2b-4ebb-8530-badd5b0b04ba\") " pod="kube-system/storage-provisioner"
	Dec 08 18:20:08 first-909000 kubelet[2470]: I1208 18:20:08.158116    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4295afa1-cc2b-4ebb-8530-badd5b0b04ba-tmp\") pod \"storage-provisioner\" (UID: \"4295afa1-cc2b-4ebb-8530-badd5b0b04ba\") " pod="kube-system/storage-provisioner"
	Dec 08 18:20:08 first-909000 kubelet[2470]: E1208 18:20:08.263315    2470 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 08 18:20:08 first-909000 kubelet[2470]: E1208 18:20:08.263353    2470 projected.go:198] Error preparing data for projected volume kube-api-access-tdz68 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 08 18:20:08 first-909000 kubelet[2470]: E1208 18:20:08.263400    2470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4295afa1-cc2b-4ebb-8530-badd5b0b04ba-kube-api-access-tdz68 podName:4295afa1-cc2b-4ebb-8530-badd5b0b04ba nodeName:}" failed. No retries permitted until 2023-12-08 18:20:08.763387066 +0000 UTC m=+12.701842228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tdz68" (UniqueName: "kubernetes.io/projected/4295afa1-cc2b-4ebb-8530-badd5b0b04ba-kube-api-access-tdz68") pod "storage-provisioner" (UID: "4295afa1-cc2b-4ebb-8530-badd5b0b04ba") : configmap "kube-root-ca.crt" not found
	Dec 08 18:20:08 first-909000 kubelet[2470]: I1208 18:20:08.847189    2470 topology_manager.go:215] "Topology Admit Handler" podUID="ce37a9fc-a56b-4b8c-840e-38c93807a9b5" podNamespace="kube-system" podName="kube-proxy-2btm5"
	Dec 08 18:20:08 first-909000 kubelet[2470]: I1208 18:20:08.962165    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce37a9fc-a56b-4b8c-840e-38c93807a9b5-xtables-lock\") pod \"kube-proxy-2btm5\" (UID: \"ce37a9fc-a56b-4b8c-840e-38c93807a9b5\") " pod="kube-system/kube-proxy-2btm5"
	Dec 08 18:20:08 first-909000 kubelet[2470]: I1208 18:20:08.962316    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvd9p\" (UniqueName: \"kubernetes.io/projected/ce37a9fc-a56b-4b8c-840e-38c93807a9b5-kube-api-access-qvd9p\") pod \"kube-proxy-2btm5\" (UID: \"ce37a9fc-a56b-4b8c-840e-38c93807a9b5\") " pod="kube-system/kube-proxy-2btm5"
	Dec 08 18:20:08 first-909000 kubelet[2470]: I1208 18:20:08.962398    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce37a9fc-a56b-4b8c-840e-38c93807a9b5-kube-proxy\") pod \"kube-proxy-2btm5\" (UID: \"ce37a9fc-a56b-4b8c-840e-38c93807a9b5\") " pod="kube-system/kube-proxy-2btm5"
	Dec 08 18:20:08 first-909000 kubelet[2470]: I1208 18:20:08.962479    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce37a9fc-a56b-4b8c-840e-38c93807a9b5-lib-modules\") pod \"kube-proxy-2btm5\" (UID: \"ce37a9fc-a56b-4b8c-840e-38c93807a9b5\") " pod="kube-system/kube-proxy-2btm5"
	Dec 08 18:20:09 first-909000 kubelet[2470]: I1208 18:20:09.040896    2470 topology_manager.go:215] "Topology Admit Handler" podUID="95b07dfe-bca1-4489-a016-2169770aaf8c" podNamespace="kube-system" podName="coredns-5dd5756b68-rkf9r"
	Dec 08 18:20:09 first-909000 kubelet[2470]: W1208 18:20:09.047670    2470 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:first-909000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'first-909000' and this object
	Dec 08 18:20:09 first-909000 kubelet[2470]: E1208 18:20:09.047714    2470 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:first-909000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'first-909000' and this object
	Dec 08 18:20:09 first-909000 kubelet[2470]: I1208 18:20:09.063620    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgm7l\" (UniqueName: \"kubernetes.io/projected/95b07dfe-bca1-4489-a016-2169770aaf8c-kube-api-access-fgm7l\") pod \"coredns-5dd5756b68-rkf9r\" (UID: \"95b07dfe-bca1-4489-a016-2169770aaf8c\") " pod="kube-system/coredns-5dd5756b68-rkf9r"
	Dec 08 18:20:09 first-909000 kubelet[2470]: I1208 18:20:09.063683    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95b07dfe-bca1-4489-a016-2169770aaf8c-config-volume\") pod \"coredns-5dd5756b68-rkf9r\" (UID: \"95b07dfe-bca1-4489-a016-2169770aaf8c\") " pod="kube-system/coredns-5dd5756b68-rkf9r"
	Dec 08 18:20:09 first-909000 kubelet[2470]: I1208 18:20:09.467543    2470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25cec61b98795b899865a21af64aa5cbb51949de975f1c876dbfd7df9d361860"
	Dec 08 18:20:10 first-909000 kubelet[2470]: I1208 18:20:10.500804    2470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2btm5" podStartSLOduration=2.500748421 podCreationTimestamp="2023-12-08 18:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-08 18:20:10.485380355 +0000 UTC m=+14.423835518" watchObservedRunningTime="2023-12-08 18:20:10.500748421 +0000 UTC m=+14.439203584"
	Dec 08 18:20:10 first-909000 kubelet[2470]: I1208 18:20:10.598783    2470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="329eda477f933f342cf80199d5086f5f77f546ea0426ed63864673892d1e6b4c"
	Dec 08 18:20:11 first-909000 kubelet[2470]: I1208 18:20:11.618870    2470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.618811409 podCreationTimestamp="2023-12-08 18:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-08 18:20:10.501884805 +0000 UTC m=+14.440339968" watchObservedRunningTime="2023-12-08 18:20:11.618811409 +0000 UTC m=+15.557266571"
	Dec 08 18:20:11 first-909000 kubelet[2470]: I1208 18:20:11.618960    2470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rkf9r" podStartSLOduration=2.618948235 podCreationTimestamp="2023-12-08 18:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-08 18:20:11.617884562 +0000 UTC m=+15.556339725" watchObservedRunningTime="2023-12-08 18:20:11.618948235 +0000 UTC m=+15.557403398"
	Dec 08 18:20:17 first-909000 kubelet[2470]: I1208 18:20:17.111436    2470 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 08 18:20:17 first-909000 kubelet[2470]: I1208 18:20:17.112808    2470 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	* 
	* ==> storage-provisioner [92c3a71ac123] <==
	* I1208 18:20:09.584251       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p first-909000 -n first-909000
helpers_test.go:261: (dbg) Run:  kubectl --context first-909000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMinikubeProfile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "first-909000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-909000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-909000: (5.25937655s)
--- FAIL: TestMinikubeProfile (65.49s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (100.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-261000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-261000-m02 --driver=hyperkit 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-261000-m02 --driver=hyperkit : exit status 14 (400.500446ms)

                                                
                                                
-- stdout --
	* [multinode-261000-m02] minikube v1.32.0 on Darwin 14.1.2
	  - MINIKUBE_LOCATION=17738
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-261000-m02' is duplicated with machine name 'multinode-261000-m02' in profile 'multinode-261000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
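The MK_USAGE failure above is minikube rejecting the requested profile name because it collides with a machine name inside the existing `multinode-261000` profile (`multinode-261000-m02` is already its second node). A minimal sketch of that validation, with hypothetical helper and data names (the real check lives in minikube's Go code):

```python
def conflicts_with_existing(profile_name, profiles):
    """Return True if profile_name collides with a machine name
    belonging to a *different* existing profile -- a re-creation of
    the duplicate-name check behind exit status 14 / MK_USAGE."""
    for profile, machines in profiles.items():
        if profile_name != profile and profile_name in machines:
            return True
    return False

# Hypothetical state matching the log: one multi-node profile with two machines.
existing = {"multinode-261000": ["multinode-261000", "multinode-261000-m02"]}

print(conflicts_with_existing("multinode-261000-m02", existing))  # True: rejected
print(conflicts_with_existing("multinode-261000-m03", existing))  # False: allowed
```

This is why the test's second attempt with `-m03` gets past name validation and fails later, at container-runtime enablement, instead.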
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-261000-m03 --driver=hyperkit 
multinode_test.go:488: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-261000-m03 --driver=hyperkit : exit status 90 (16.003138115s)

-- stdout --
	* [multinode-261000-m03] minikube v1.32.0 on Darwin 14.1.2
	  - MINIKUBE_LOCATION=17738
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node multinode-261000-m03 in cluster multinode-261000-m03
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Fri 2023-12-08 18:30:19 UTC, ends at Fri 2023-12-08 18:30:25 UTC. --
	Dec 08 18:30:19 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 08 18:30:19 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 08 18:30:22 multinode-261000-m03 systemd[1]: cri-docker.socket: Succeeded.
	Dec 08 18:30:22 multinode-261000-m03 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 08 18:30:22 multinode-261000-m03 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 08 18:30:22 multinode-261000-m03 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 08 18:30:22 multinode-261000-m03 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 08 18:30:25 multinode-261000-m03 systemd[1]: cri-docker.socket: Succeeded.
	Dec 08 18:30:25 multinode-261000-m03 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 08 18:30:25 multinode-261000-m03 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 08 18:30:25 multinode-261000-m03 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 08 18:30:25 multinode-261000-m03 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:490: failed to start profile. args "out/minikube-darwin-amd64 start -p multinode-261000-m03 --driver=hyperkit " : exit status 90
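The RUNTIME_ENABLE failure traces to the last two journal lines above: systemd refuses to re-listen on `cri-docker.socket` while `cri-docker.service` is still holding the socket ("Socket service cri-docker.service already active, refusing."). A small sketch that scans journal output for that refusal pattern (log format as captured above):

```python
REFUSAL = "Socket service cri-docker.service already active, refusing."

def socket_restart_refused(journal_lines):
    """Detect the systemd refusal seen in the journal above:
    restarting a .socket unit while its paired .service is still
    active makes the Listen step fail."""
    return any(REFUSAL in line for line in journal_lines)

journal = [
    "Dec 08 18:30:25 multinode-261000-m03 systemd[1]: cri-docker.socket: "
    "Socket service cri-docker.service already active, refusing.",
    "Dec 08 18:30:25 multinode-261000-m03 systemd[1]: Failed to listen on "
    "CRI Docker Socket for the API.",
]
print(socket_restart_refused(journal))  # True
```

A manual recovery would presumably stop the service before restarting the socket (`sudo systemctl stop cri-docker.service` followed by `sudo systemctl restart cri-docker.socket`), though the durable fix belongs in the provisioning order rather than in operator intervention.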
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-261000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-261000: exit status 80 (263.253236ms)

-- stdout --
	* Adding node m03 to cluster multinode-261000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-261000-m03 already exists in multinode-261000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-261000-m03
multinode_test.go:500: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-261000-m03: (8.627698304s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-261000 -n multinode-261000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-261000 -n multinode-261000: exit status 3 (1m15.09414228s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1208 10:31:49.503801    3857 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.13:22: connect: operation timed out
	E1208 10:31:49.503818    3857 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.13:22: connect: operation timed out

** /stderr **
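The post-mortem status check above fails with `dial tcp 192.169.0.13:22: connect: operation timed out`: the node's VM never answers on SSH, so `status` reports exit status 3 after roughly 75 seconds. A minimal reachability probe of the same kind, using only the standard library and a short timeout so it fails fast instead of hanging:

```python
import socket

def ssh_reachable(host, port=22, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within
    timeout seconds -- the same dial the status command performs
    before opening an SSH session."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The VM at 192.169.0.13 is gone in this run, so probing it would
# return False once the timeout elapses.
```

This only sketches the transport-level check; the real status command additionally authenticates over SSH and queries the guest, either of which can fail even when the TCP dial succeeds.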
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-261000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (100.45s)

TestStoppedBinaryUpgrade/Upgrade (115.94s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1388774082.exe start -p stopped-upgrade-200000 --memory=2200 --vm-driver=hyperkit 
E1208 10:45:27.130798    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1388774082.exe start -p stopped-upgrade-200000 --memory=2200 --vm-driver=hyperkit : (1m24.80579821s)
version_upgrade_test.go:205: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1388774082.exe -p stopped-upgrade-200000 stop
version_upgrade_test.go:205: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1388774082.exe -p stopped-upgrade-200000 stop: (8.084902561s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-200000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1208 10:46:49.050126    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p stopped-upgrade-200000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (23.037414604s)

-- stdout --
	* [stopped-upgrade-200000] minikube v1.32.0 on Darwin 14.1.2
	  - MINIKUBE_LOCATION=17738
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperkit driver based on existing profile
	* Starting control plane node stopped-upgrade-200000 in cluster stopped-upgrade-200000
	* Restarting existing hyperkit VM for "stopped-upgrade-200000" ...
	
	

-- /stdout --
** stderr ** 
	I1208 10:46:47.832488    5081 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:46:47.832777    5081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:46:47.832782    5081 out.go:309] Setting ErrFile to fd 2...
	I1208 10:46:47.832786    5081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:46:47.832974    5081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	I1208 10:46:47.834463    5081 out.go:303] Setting JSON to false
	I1208 10:46:47.857085    5081 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2763,"bootTime":1702058444,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1208 10:46:47.857195    5081 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1208 10:46:47.878609    5081 out.go:177] * [stopped-upgrade-200000] minikube v1.32.0 on Darwin 14.1.2
	I1208 10:46:47.920214    5081 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 10:46:47.920252    5081 notify.go:220] Checking for updates...
	I1208 10:46:47.962354    5081 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	I1208 10:46:48.005376    5081 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1208 10:46:48.046353    5081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 10:46:48.088174    5081 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	I1208 10:46:48.130312    5081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 10:46:48.151588    5081 config.go:182] Loaded profile config "stopped-upgrade-200000": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1208 10:46:48.151608    5081 start_flags.go:694] config upgrade: Driver=hyperkit
	I1208 10:46:48.151616    5081 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0
	I1208 10:46:48.151672    5081 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/stopped-upgrade-200000/config.json ...
	I1208 10:46:48.152442    5081 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:46:48.152509    5081 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:46:48.161276    5081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52582
	I1208 10:46:48.161687    5081 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:46:48.162154    5081 main.go:141] libmachine: Using API Version  1
	I1208 10:46:48.162166    5081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:46:48.162449    5081 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:46:48.162579    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	I1208 10:46:48.185465    5081 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1208 10:46:48.206382    5081 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 10:46:48.206723    5081 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:46:48.206759    5081 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:46:48.215351    5081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52584
	I1208 10:46:48.215747    5081 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:46:48.216122    5081 main.go:141] libmachine: Using API Version  1
	I1208 10:46:48.216144    5081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:46:48.216381    5081 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:46:48.216495    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	I1208 10:46:48.245139    5081 out.go:177] * Using the hyperkit driver based on existing profile
	I1208 10:46:48.287465    5081 start.go:298] selected driver: hyperkit
	I1208 10:46:48.287479    5081 start.go:902] validating driver "hyperkit" against &{Name:stopped-upgrade-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v
1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.169.0.28 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1208 10:46:48.287593    5081 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 10:46:48.290577    5081 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:46:48.290688    5081 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17738-1113/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1208 10:46:48.298624    5081 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1208 10:46:48.302428    5081 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:46:48.302451    5081 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1208 10:46:48.302567    5081 cni.go:84] Creating CNI manager for ""
	I1208 10:46:48.302584    5081 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1208 10:46:48.302591    5081 start_flags.go:323] config:
	{Name:stopped-upgrade-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServe
rIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.169.0.28 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1208 10:46:48.302754    5081 iso.go:125] acquiring lock: {Name:mk933f5286cca8a935e46c54218c5cced15285e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:46:48.344373    5081 out.go:177] * Starting control plane node stopped-upgrade-200000 in cluster stopped-upgrade-200000
	I1208 10:46:48.365359    5081 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1208 10:46:48.429272    5081 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1208 10:46:48.429367    5081 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/stopped-upgrade-200000/config.json ...
	I1208 10:46:48.429425    5081 cache.go:107] acquiring lock: {Name:mk190e9e2cb818b3bc714afbb9d5041a5bfb203c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:46:48.429442    5081 cache.go:107] acquiring lock: {Name:mke6caf354c981f64489b802026809f89c73511a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:46:48.429456    5081 cache.go:107] acquiring lock: {Name:mk0b51e14dcc8733de45bbb64a98428de2d77a4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:46:48.429531    5081 cache.go:115] /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 10:46:48.429546    5081 cache.go:115] /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1208 10:46:48.429549    5081 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 126.962µs
	I1208 10:46:48.429558    5081 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 133.315µs
	I1208 10:46:48.429561    5081 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 10:46:48.429567    5081 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1208 10:46:48.429575    5081 cache.go:115] /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1208 10:46:48.429586    5081 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 149.144µs
	I1208 10:46:48.429596    5081 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1208 10:46:48.429560    5081 cache.go:107] acquiring lock: {Name:mk7f224923d4dd0594da6e903217dcffa094d663 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:46:48.429601    5081 cache.go:107] acquiring lock: {Name:mke0cbb28df81883f4a3ab659e5390c8f848a27b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:46:48.429620    5081 cache.go:107] acquiring lock: {Name:mke78793faacc91da63221e9b88ce2bc07c0189e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:46:48.429635    5081 cache.go:107] acquiring lock: {Name:mk32a12829b90f4dd9181a8499f42fc9606e1771 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:46:48.429661    5081 cache.go:107] acquiring lock: {Name:mkd23ef1bfc3f234cd7706896f40b82e4291d887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:46:48.429718    5081 cache.go:115] /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1208 10:46:48.429728    5081 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 196.408µs
	I1208 10:46:48.429736    5081 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1208 10:46:48.429740    5081 cache.go:115] /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1208 10:46:48.429752    5081 cache.go:115] /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1208 10:46:48.429756    5081 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 225.256µs
	I1208 10:46:48.429765    5081 cache.go:115] /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1208 10:46:48.429764    5081 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 186.1µs
	I1208 10:46:48.429769    5081 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1208 10:46:48.429775    5081 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1208 10:46:48.429775    5081 cache.go:115] /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1208 10:46:48.429776    5081 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 214.222µs
	I1208 10:46:48.429804    5081 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1208 10:46:48.429787    5081 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 164.453µs
	I1208 10:46:48.429814    5081 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1208 10:46:48.429822    5081 cache.go:87] Successfully saved all images to host disk.
	I1208 10:46:48.429890    5081 start.go:365] acquiring machines lock for stopped-upgrade-200000: {Name:mkf6539d901e554b062746e761b420c8557e3211 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1208 10:46:48.429944    5081 start.go:369] acquired machines lock for "stopped-upgrade-200000" in 43.73µs
	I1208 10:46:48.429961    5081 start.go:96] Skipping create...Using existing machine configuration
	I1208 10:46:48.429970    5081 fix.go:54] fixHost starting: minikube
	I1208 10:46:48.430186    5081 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:46:48.430204    5081 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:46:48.438282    5081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52586
	I1208 10:46:48.438635    5081 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:46:48.439019    5081 main.go:141] libmachine: Using API Version  1
	I1208 10:46:48.439036    5081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:46:48.439244    5081 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:46:48.439355    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	I1208 10:46:48.439460    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetState
	I1208 10:46:48.439547    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:46:48.439615    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | hyperkit pid from json: 4964
	I1208 10:46:48.440535    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | hyperkit pid 4964 missing from process table
	I1208 10:46:48.440556    5081 fix.go:102] recreateIfNeeded on stopped-upgrade-200000: state=Stopped err=<nil>
	I1208 10:46:48.440571    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	W1208 10:46:48.440658    5081 fix.go:128] unexpected machine state, will restart: <nil>
	I1208 10:46:48.482398    5081 out.go:177] * Restarting existing hyperkit VM for "stopped-upgrade-200000" ...
	I1208 10:46:48.503304    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .Start
	I1208 10:46:48.503451    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:46:48.503483    5081 main.go:141] libmachine: (stopped-upgrade-200000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/hyperkit.pid
	I1208 10:46:48.504497    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | hyperkit pid 4964 missing from process table
	I1208 10:46:48.504512    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | pid 4964 is in state "Stopped"
	I1208 10:46:48.504525    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/hyperkit.pid...
	I1208 10:46:48.504708    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | Using UUID ed2da600-95f9-11ee-b2ba-f01898ef957c
	I1208 10:46:48.528610    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | Generated MAC 4e:72:4c:b1:5e:d5
	I1208 10:46:48.528632    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=stopped-upgrade-200000
	I1208 10:46:48.528791    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ed2da600-95f9-11ee-b2ba-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000468db0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil),
CmdLine:"", process:(*os.Process)(nil)}
	I1208 10:46:48.528825    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ed2da600-95f9-11ee-b2ba-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000468db0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1208 10:46:48.528883    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ed2da600-95f9-11ee-b2ba-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/stopped-upgrade-200000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/tty,log=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/bzimage,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=stopped-upgrade-200000"}
	I1208 10:46:48.528924    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ed2da600-95f9-11ee-b2ba-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/stopped-upgrade-200000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/tty,log=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/console-ring -f kexec,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/bzimage,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=stopped-upgrade-200000"
	I1208 10:46:48.528948    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1208 10:46:48.530296    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 DEBUG: hyperkit: Pid is 5092
	I1208 10:46:48.530818    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | Attempt 0
	I1208 10:46:48.530838    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:46:48.530898    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | hyperkit pid from json: 5092
	I1208 10:46:48.532559    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | Searching for 4e:72:4c:b1:5e:d5 in /var/db/dhcpd_leases ...
	I1208 10:46:48.532677    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | Found 27 entries in /var/db/dhcpd_leases!
	I1208 10:46:48.532707    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:46:60:8d:22:4b ID:1,6:46:60:8d:22:4b Lease:0x6574b60c}
	I1208 10:46:48.532741    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:4e:72:4c:b1:5e:d5 ID:1,4e:72:4c:b1:5e:d5 Lease:0x6574b5cd}
	I1208 10:46:48.532752    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | Found match: 4e:72:4c:b1:5e:d5
	I1208 10:46:48.532761    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | IP: 192.169.0.28
	I1208 10:46:48.532777    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetConfigRaw
	I1208 10:46:48.533507    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetIP
	I1208 10:46:48.533693    5081 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/stopped-upgrade-200000/config.json ...
	I1208 10:46:48.534117    5081 machine.go:88] provisioning docker machine ...
	I1208 10:46:48.534128    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	I1208 10:46:48.534234    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetMachineName
	I1208 10:46:48.534346    5081 buildroot.go:166] provisioning hostname "stopped-upgrade-200000"
	I1208 10:46:48.534358    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetMachineName
	I1208 10:46:48.534456    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:46:48.534559    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHPort
	I1208 10:46:48.534669    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:46:48.534779    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:46:48.534880    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHUsername
	I1208 10:46:48.535019    5081 main.go:141] libmachine: Using SSH client type: native
	I1208 10:46:48.535343    5081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I1208 10:46:48.535353    5081 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-200000 && echo "stopped-upgrade-200000" | sudo tee /etc/hostname
	I1208 10:46:48.538266    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1208 10:46:48.546640    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1208 10:46:48.547828    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1208 10:46:48.547868    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1208 10:46:48.547884    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1208 10:46:48.547898    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1208 10:46:48.941427    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:48 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1208 10:46:49.046894    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1208 10:46:49.046929    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1208 10:46:49.046947    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1208 10:46:49.047015    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1208 10:46:49.047763    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:46:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1208 10:47:03.584707    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:47:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1208 10:47:03.584721    5081 main.go:141] libmachine: (stopped-upgrade-200000) DBG | 2023/12/08 10:47:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1208 10:47:08.115776    5081 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-200000
	
	I1208 10:47:08.115795    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:47:08.115932    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHPort
	I1208 10:47:08.116030    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.116130    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.116211    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHUsername
	I1208 10:47:08.116347    5081 main.go:141] libmachine: Using SSH client type: native
	I1208 10:47:08.116592    5081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I1208 10:47:08.116605    5081 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-200000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-200000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-200000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 10:47:08.183197    5081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 10:47:08.183220    5081 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17738-1113/.minikube CaCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17738-1113/.minikube}
	I1208 10:47:08.183239    5081 buildroot.go:174] setting up certificates
	I1208 10:47:08.183252    5081 provision.go:83] configureAuth start
	I1208 10:47:08.183261    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetMachineName
	I1208 10:47:08.183395    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetIP
	I1208 10:47:08.183488    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:47:08.183581    5081 provision.go:138] copyHostCerts
	I1208 10:47:08.183659    5081 exec_runner.go:144] found /Users/jenkins/minikube-integration/17738-1113/.minikube/ca.pem, removing ...
	I1208 10:47:08.183669    5081 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17738-1113/.minikube/ca.pem
	I1208 10:47:08.184580    5081 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/ca.pem (1078 bytes)
	I1208 10:47:08.184798    5081 exec_runner.go:144] found /Users/jenkins/minikube-integration/17738-1113/.minikube/cert.pem, removing ...
	I1208 10:47:08.184805    5081 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17738-1113/.minikube/cert.pem
	I1208 10:47:08.185298    5081 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/cert.pem (1123 bytes)
	I1208 10:47:08.185491    5081 exec_runner.go:144] found /Users/jenkins/minikube-integration/17738-1113/.minikube/key.pem, removing ...
	I1208 10:47:08.185498    5081 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17738-1113/.minikube/key.pem
	I1208 10:47:08.185704    5081 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/key.pem (1679 bytes)
	I1208 10:47:08.185859    5081 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-200000 san=[192.169.0.28 192.169.0.28 localhost 127.0.0.1 minikube stopped-upgrade-200000]
	I1208 10:47:08.270616    5081 provision.go:172] copyRemoteCerts
	I1208 10:47:08.270677    5081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 10:47:08.270693    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:47:08.270817    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHPort
	I1208 10:47:08.270920    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.271023    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHUsername
	I1208 10:47:08.271117    5081 sshutil.go:53] new ssh client: &{IP:192.169.0.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/id_rsa Username:docker}
	I1208 10:47:08.306239    5081 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 10:47:08.314931    5081 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1208 10:47:08.323662    5081 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 10:47:08.332217    5081 provision.go:86] duration metric: configureAuth took 148.956116ms
	I1208 10:47:08.332228    5081 buildroot.go:189] setting minikube options for container-runtime
	I1208 10:47:08.332345    5081 config.go:182] Loaded profile config "stopped-upgrade-200000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1208 10:47:08.332358    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	I1208 10:47:08.332484    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:47:08.332575    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHPort
	I1208 10:47:08.332661    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.332743    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.332838    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHUsername
	I1208 10:47:08.332939    5081 main.go:141] libmachine: Using SSH client type: native
	I1208 10:47:08.333173    5081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I1208 10:47:08.333182    5081 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1208 10:47:08.396657    5081 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1208 10:47:08.396669    5081 buildroot.go:70] root file system type: tmpfs
	I1208 10:47:08.396753    5081 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1208 10:47:08.396767    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:47:08.396896    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHPort
	I1208 10:47:08.396996    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.397089    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.397187    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHUsername
	I1208 10:47:08.397308    5081 main.go:141] libmachine: Using SSH client type: native
	I1208 10:47:08.397543    5081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I1208 10:47:08.397588    5081 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1208 10:47:08.465893    5081 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1208 10:47:08.465950    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:47:08.466085    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHPort
	I1208 10:47:08.466191    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.466315    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.466400    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHUsername
	I1208 10:47:08.466547    5081 main.go:141] libmachine: Using SSH client type: native
	I1208 10:47:08.466796    5081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I1208 10:47:08.466809    5081 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1208 10:47:08.930935    5081 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1208 10:47:08.930956    5081 machine.go:91] provisioned docker machine in 20.397087524s
	I1208 10:47:08.930970    5081 start.go:300] post-start starting for "stopped-upgrade-200000" (driver="hyperkit")
	I1208 10:47:08.930981    5081 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 10:47:08.930995    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	I1208 10:47:08.931192    5081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 10:47:08.931211    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:47:08.931306    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHPort
	I1208 10:47:08.931426    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.931510    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHUsername
	I1208 10:47:08.931598    5081 sshutil.go:53] new ssh client: &{IP:192.169.0.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/id_rsa Username:docker}
	I1208 10:47:08.966631    5081 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 10:47:08.969283    5081 info.go:137] Remote host: Buildroot 2019.02.7
	I1208 10:47:08.969296    5081 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17738-1113/.minikube/addons for local assets ...
	I1208 10:47:08.969377    5081 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17738-1113/.minikube/files for local assets ...
	I1208 10:47:08.969514    5081 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17738-1113/.minikube/files/etc/ssl/certs/15852.pem -> 15852.pem in /etc/ssl/certs
	I1208 10:47:08.969660    5081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 10:47:08.973294    5081 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/files/etc/ssl/certs/15852.pem --> /etc/ssl/certs/15852.pem (1708 bytes)
	I1208 10:47:08.982236    5081 start.go:303] post-start completed in 51.257543ms
	I1208 10:47:08.982249    5081 fix.go:56] fixHost completed within 20.552540424s
	I1208 10:47:08.982305    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:47:08.982440    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHPort
	I1208 10:47:08.982536    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.982617    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:08.982692    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHUsername
	I1208 10:47:08.982806    5081 main.go:141] libmachine: Using SSH client type: native
	I1208 10:47:08.983043    5081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.28 22 <nil> <nil>}
	I1208 10:47:08.983053    5081 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1208 10:47:09.046700    5081 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702061228.064387520
	
	I1208 10:47:09.046715    5081 fix.go:206] guest clock: 1702061228.064387520
	I1208 10:47:09.046721    5081 fix.go:219] Guest: 2023-12-08 10:47:08.06438752 -0800 PST Remote: 2023-12-08 10:47:08.982252 -0800 PST m=+21.194817741 (delta=-917.86448ms)
	I1208 10:47:09.046740    5081 fix.go:190] guest clock delta is within tolerance: -917.86448ms
	I1208 10:47:09.046744    5081 start.go:83] releasing machines lock for "stopped-upgrade-200000", held for 20.617053188s
	I1208 10:47:09.046764    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	I1208 10:47:09.046924    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetIP
	I1208 10:47:09.047029    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	I1208 10:47:09.047332    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	I1208 10:47:09.047432    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .DriverName
	I1208 10:47:09.047489    5081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 10:47:09.047527    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:47:09.047600    5081 ssh_runner.go:195] Run: cat /version.json
	I1208 10:47:09.047623    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHPort
	I1208 10:47:09.047635    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHHostname
	I1208 10:47:09.047730    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:09.047749    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHPort
	I1208 10:47:09.047824    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHUsername
	I1208 10:47:09.047845    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHKeyPath
	I1208 10:47:09.047919    5081 sshutil.go:53] new ssh client: &{IP:192.169.0.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/id_rsa Username:docker}
	I1208 10:47:09.047934    5081 main.go:141] libmachine: (stopped-upgrade-200000) Calling .GetSSHUsername
	I1208 10:47:09.048028    5081 sshutil.go:53] new ssh client: &{IP:192.169.0.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/stopped-upgrade-200000/id_rsa Username:docker}
	W1208 10:47:09.136345    5081 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1208 10:47:09.136433    5081 ssh_runner.go:195] Run: systemctl --version
	I1208 10:47:09.139805    5081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 10:47:09.143341    5081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 10:47:09.143399    5081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1208 10:47:09.146804    5081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1208 10:47:09.150029    5081 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1208 10:47:09.150045    5081 start.go:475] detecting cgroup driver to use...
	I1208 10:47:09.150140    5081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 10:47:09.157853    5081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1208 10:47:09.161932    5081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 10:47:09.165937    5081 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 10:47:09.165979    5081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 10:47:09.169951    5081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 10:47:09.174019    5081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 10:47:09.178131    5081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 10:47:09.182347    5081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 10:47:09.186992    5081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 10:47:09.191446    5081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 10:47:09.195068    5081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 10:47:09.198718    5081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:47:09.261399    5081 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 10:47:09.273593    5081 start.go:475] detecting cgroup driver to use...
	I1208 10:47:09.273665    5081 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1208 10:47:09.282216    5081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 10:47:09.289275    5081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 10:47:09.311977    5081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 10:47:09.319946    5081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 10:47:09.328077    5081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 10:47:09.335808    5081 ssh_runner.go:195] Run: which cri-dockerd
	I1208 10:47:09.338020    5081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1208 10:47:09.341903    5081 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1208 10:47:09.348499    5081 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1208 10:47:09.416392    5081 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1208 10:47:09.483723    5081 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1208 10:47:09.483809    5081 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1208 10:47:09.490640    5081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:47:09.554668    5081 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1208 10:47:10.593525    5081 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.038848593s)
	I1208 10:47:10.593595    5081 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1208 10:47:10.640394    5081 out.go:177] 
	W1208 10:47:10.676999    5081 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Fri 2023-12-08 18:47:04 UTC, end at Fri 2023-12-08 18:47:09 UTC. --
	Dec 08 18:47:07 stopped-upgrade-200000 systemd[1]: Starting Docker Application Container Engine...
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.704062043Z" level=info msg="Starting up"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.705066323Z" level=info msg="libcontainerd: started new containerd process" pid=1858
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.705153599Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.705162262Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.705175732Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.705185932Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.722738167Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.722943545Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.722992158Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.723129019Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.723161814Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.723960656Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.723971301Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.723999210Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724079051Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724183831Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724214168Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724230029Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724235222Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724238690Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724528675Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724564352Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724584011Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724591654Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724598267Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724605456Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724612097Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724618627Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724624842Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724631141Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724684070Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724716270Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724909822Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724947298Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724974755Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724987042Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724993615Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.724999689Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725005412Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725038907Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725049661Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725056144Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725062096Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725089933Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725098374Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725104670Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725110891Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725179239Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725229657Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.725237675Z" level=info msg="containerd successfully booted in 0.002797s"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.728574752Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.728613933Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.728630173Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.728639156Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.731363157Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.731428273Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.731477645Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.731518646Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.734954881Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.754478299Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.754513439Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.754520853Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.754525222Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.754529268Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.754533808Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.754654080Z" level=info msg="Loading containers: start."
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.779569857Z" level=warning msg="d8d4beaf0fa8e35e2efd33ed45ce20970f20eade9ad9c5962075c0cd43c72aae cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d8d4beaf0fa8e35e2efd33ed45ce20970f20eade9ad9c5962075c0cd43c72aae/mounts/shm, flags: 0x2: no such file or directory"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.780199171Z" level=warning msg="fd7cd9e98f17499fbd921b05befa4ed17ced3cf4c66c8c5798661658b2248412 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/fd7cd9e98f17499fbd921b05befa4ed17ced3cf4c66c8c5798661658b2248412/mounts/shm, flags: 0x2: no such file or directory"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.780786692Z" level=error msg="fd7cd9e98f17499fbd921b05befa4ed17ced3cf4c66c8c5798661658b2248412 cleanup: failed to delete container from containerd: no such container"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.781256827Z" level=warning msg="fe8b93941a25b5252dcce844675b3423b2de61080ccc2c2e67d039849b477db3 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/fe8b93941a25b5252dcce844675b3423b2de61080ccc2c2e67d039849b477db3/mounts/shm, flags: 0x2: no such file or directory"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.889921330Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.920312247Z" level=info msg="Loading containers: done."
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.934888944Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.934982462Z" level=info msg="Daemon has completed initialization"
	Dec 08 18:47:07 stopped-upgrade-200000 systemd[1]: Started Docker Application Container Engine.
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.949218905Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.949379364Z" level=info msg="API listen on [::]:2376"
	Dec 08 18:47:08 stopped-upgrade-200000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 08 18:47:08 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:08.578767571Z" level=info msg="Processing signal 'terminated'"
	Dec 08 18:47:08 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:08.579274143Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 08 18:47:08 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:08.579590347Z" level=info msg="Daemon shutdown complete"
	Dec 08 18:47:08 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:08.579610749Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 08 18:47:08 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:08.579621782Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: docker.service: Succeeded.
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: Starting Docker Application Container Engine...
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.607842962Z" level=info msg="Starting up"
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.609388145Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.609459125Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.609504852Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.609545575Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.609740912Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.779569857Z" level=warning msg="d8d4beaf0fa8e35e2efd33ed45ce20970f20eade9ad9c5962075c0cd43c72aae cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d8d4beaf0fa8e35e2efd33ed45ce20970f20eade9ad9c5962075c0cd43c72aae/mounts/shm, flags: 0x2: no such file or directory"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.780199171Z" level=warning msg="fd7cd9e98f17499fbd921b05befa4ed17ced3cf4c66c8c5798661658b2248412 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/fd7cd9e98f17499fbd921b05befa4ed17ced3cf4c66c8c5798661658b2248412/mounts/shm, flags: 0x2: no such file or directory"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.780786692Z" level=error msg="fd7cd9e98f17499fbd921b05befa4ed17ced3cf4c66c8c5798661658b2248412 cleanup: failed to delete container from containerd: no such container"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.781256827Z" level=warning msg="fe8b93941a25b5252dcce844675b3423b2de61080ccc2c2e67d039849b477db3 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/fe8b93941a25b5252dcce844675b3423b2de61080ccc2c2e67d039849b477db3/mounts/shm, flags: 0x2: no such file or directory"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.889921330Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.920312247Z" level=info msg="Loading containers: done."
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.934888944Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.934982462Z" level=info msg="Daemon has completed initialization"
	Dec 08 18:47:07 stopped-upgrade-200000 systemd[1]: Started Docker Application Container Engine.
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.949218905Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 08 18:47:07 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:07.949379364Z" level=info msg="API listen on [::]:2376"
	Dec 08 18:47:08 stopped-upgrade-200000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 08 18:47:08 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:08.578767571Z" level=info msg="Processing signal 'terminated'"
	Dec 08 18:47:08 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:08.579274143Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 08 18:47:08 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:08.579590347Z" level=info msg="Daemon shutdown complete"
	Dec 08 18:47:08 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:08.579610749Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 08 18:47:08 stopped-upgrade-200000 dockerd[1851]: time="2023-12-08T18:47:08.579621782Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: docker.service: Succeeded.
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: Starting Docker Application Container Engine...
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.607842962Z" level=info msg="Starting up"
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.609388145Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.609459125Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.609504852Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.609545575Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: time="2023-12-08T18:47:09.609740912Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 08 18:47:09 stopped-upgrade-200000 dockerd[2142]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 08 18:47:09 stopped-upgrade-200000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1208 10:47:10.677064    5081 out.go:239] * 
	W1208 10:47:10.677739    5081 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 10:47:10.755942    5081 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-darwin-amd64 start -p stopped-upgrade-200000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (115.94s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (15.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : exit status 90 (15.403692304s)

                                                
                                                
-- stdout --
	* [calico-387000] minikube v1.32.0 on Darwin 14.1.2
	  - MINIKUBE_LOCATION=17738
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node calico-387000 in cluster calico-387000
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 10:50:10.342342    5593 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:50:10.342648    5593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:50:10.342655    5593 out.go:309] Setting ErrFile to fd 2...
	I1208 10:50:10.342659    5593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:50:10.342846    5593 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	I1208 10:50:10.344458    5593 out.go:303] Setting JSON to false
	I1208 10:50:10.371343    5593 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2966,"bootTime":1702058444,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1208 10:50:10.371463    5593 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1208 10:50:10.429258    5593 out.go:177] * [calico-387000] minikube v1.32.0 on Darwin 14.1.2
	I1208 10:50:10.502621    5593 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 10:50:10.478285    5593 notify.go:220] Checking for updates...
	I1208 10:50:10.545286    5593 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	I1208 10:50:10.593397    5593 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1208 10:50:10.614601    5593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 10:50:10.636232    5593 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	I1208 10:50:10.678363    5593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 10:50:10.699831    5593 config.go:182] Loaded profile config "kindnet-387000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1208 10:50:10.699980    5593 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 10:50:10.728287    5593 out.go:177] * Using the hyperkit driver based on user configuration
	I1208 10:50:10.786348    5593 start.go:298] selected driver: hyperkit
	I1208 10:50:10.786377    5593 start.go:902] validating driver "hyperkit" against <nil>
	I1208 10:50:10.786398    5593 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 10:50:10.790860    5593 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:50:10.790981    5593 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17738-1113/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1208 10:50:10.798782    5593 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1208 10:50:10.802679    5593 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:50:10.802703    5593 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1208 10:50:10.802732    5593 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1208 10:50:10.802949    5593 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 10:50:10.803012    5593 cni.go:84] Creating CNI manager for "calico"
	I1208 10:50:10.803023    5593 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I1208 10:50:10.803030    5593 start_flags.go:323] config:
	{Name:calico-387000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 10:50:10.803172    5593 iso.go:125] acquiring lock: {Name:mk933f5286cca8a935e46c54218c5cced15285e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:50:10.813475    5593 out.go:177] * Starting control plane node calico-387000 in cluster calico-387000
	I1208 10:50:10.836182    5593 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1208 10:50:10.836252    5593 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1208 10:50:10.836282    5593 cache.go:56] Caching tarball of preloaded images
	I1208 10:50:10.836475    5593 preload.go:174] Found /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1208 10:50:10.836496    5593 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1208 10:50:10.836636    5593 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/calico-387000/config.json ...
	I1208 10:50:10.836669    5593 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/calico-387000/config.json: {Name:mk3d9cfd0bd48f103df077b635f7d2a1dc9bf9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 10:50:10.837238    5593 start.go:365] acquiring machines lock for calico-387000: {Name:mkf6539d901e554b062746e761b420c8557e3211 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1208 10:50:10.837306    5593 start.go:369] acquired machines lock for "calico-387000" in 54.768µs
	I1208 10:50:10.837335    5593 start.go:93] Provisioning new machine with config: &{Name:calico-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1208 10:50:10.837400    5593 start.go:125] createHost starting for "" (driver="hyperkit")
	I1208 10:50:10.899318    5593 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1208 10:50:10.899703    5593 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:50:10.899773    5593 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:50:10.908511    5593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53299
	I1208 10:50:10.908899    5593 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:50:10.909377    5593 main.go:141] libmachine: Using API Version  1
	I1208 10:50:10.909392    5593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:50:10.909646    5593 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:50:10.909759    5593 main.go:141] libmachine: (calico-387000) Calling .GetMachineName
	I1208 10:50:10.909855    5593 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I1208 10:50:10.909954    5593 start.go:159] libmachine.API.Create for "calico-387000" (driver="hyperkit")
	I1208 10:50:10.909982    5593 client.go:168] LocalClient.Create starting
	I1208 10:50:10.910025    5593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem
	I1208 10:50:10.910084    5593 main.go:141] libmachine: Decoding PEM data...
	I1208 10:50:10.910102    5593 main.go:141] libmachine: Parsing certificate...
	I1208 10:50:10.910163    5593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem
	I1208 10:50:10.910199    5593 main.go:141] libmachine: Decoding PEM data...
	I1208 10:50:10.910213    5593 main.go:141] libmachine: Parsing certificate...
	I1208 10:50:10.910226    5593 main.go:141] libmachine: Running pre-create checks...
	I1208 10:50:10.910235    5593 main.go:141] libmachine: (calico-387000) Calling .PreCreateCheck
	I1208 10:50:10.910322    5593 main.go:141] libmachine: (calico-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:50:10.910484    5593 main.go:141] libmachine: (calico-387000) Calling .GetConfigRaw
	I1208 10:50:10.910888    5593 main.go:141] libmachine: Creating machine...
	I1208 10:50:10.910898    5593 main.go:141] libmachine: (calico-387000) Calling .Create
	I1208 10:50:10.910971    5593 main.go:141] libmachine: (calico-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:50:10.911142    5593 main.go:141] libmachine: (calico-387000) DBG | I1208 10:50:10.910967    5601 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17738-1113/.minikube
	I1208 10:50:10.911193    5593 main.go:141] libmachine: (calico-387000) Downloading /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17738-1113/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1208 10:50:11.080863    5593 main.go:141] libmachine: (calico-387000) DBG | I1208 10:50:11.080797    5601 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/id_rsa...
	I1208 10:50:11.304947    5593 main.go:141] libmachine: (calico-387000) DBG | I1208 10:50:11.304862    5601 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/calico-387000.rawdisk...
	I1208 10:50:11.304965    5593 main.go:141] libmachine: (calico-387000) DBG | Writing magic tar header
	I1208 10:50:11.304974    5593 main.go:141] libmachine: (calico-387000) DBG | Writing SSH key tar header
	I1208 10:50:11.305292    5593 main.go:141] libmachine: (calico-387000) DBG | I1208 10:50:11.305248    5601 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000 ...
	I1208 10:50:11.640728    5593 main.go:141] libmachine: (calico-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:50:11.640746    5593 main.go:141] libmachine: (calico-387000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/hyperkit.pid
	I1208 10:50:11.640783    5593 main.go:141] libmachine: (calico-387000) DBG | Using UUID 9d54978c-95fa-11ee-92a0-f01898ef957c
	I1208 10:50:11.668197    5593 main.go:141] libmachine: (calico-387000) DBG | Generated MAC 16:aa:5c:9:9d:be
	I1208 10:50:11.668215    5593 main.go:141] libmachine: (calico-387000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=calico-387000
	I1208 10:50:11.668247    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d54978c-95fa-11ee-92a0-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000182300)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1208 10:50:11.668278    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d54978c-95fa-11ee-92a0-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000182300)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1208 10:50:11.668353    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d54978c-95fa-11ee-92a0-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/calico-387000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/tty,log=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/bzimage,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=calico-387000"}
	I1208 10:50:11.668394    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d54978c-95fa-11ee-92a0-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/calico-387000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/tty,log=/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/console-ring -f kexec,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/bzimage,/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=calico-387000"
	I1208 10:50:11.668406    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1208 10:50:11.671538    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 DEBUG: hyperkit: Pid is 5602
	I1208 10:50:11.671913    5593 main.go:141] libmachine: (calico-387000) DBG | Attempt 0
	I1208 10:50:11.671922    5593 main.go:141] libmachine: (calico-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:50:11.671992    5593 main.go:141] libmachine: (calico-387000) DBG | hyperkit pid from json: 5602
	I1208 10:50:11.673056    5593 main.go:141] libmachine: (calico-387000) DBG | Searching for 16:aa:5c:9:9d:be in /var/db/dhcpd_leases ...
	I1208 10:50:11.673150    5593 main.go:141] libmachine: (calico-387000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I1208 10:50:11.673168    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:d6:5e:3f:47:d:21 ID:1,d6:5e:3f:47:d:21 Lease:0x6574b6cc}
	I1208 10:50:11.673262    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:6a:df:ff:a:b0:c8 ID:1,6a:df:ff:a:b0:c8 Lease:0x6574b69c}
	I1208 10:50:11.673318    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:22:55:de:79:29:42 ID:1,22:55:de:79:29:42 Lease:0x65736541}
	I1208 10:50:11.673331    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:b6:dd:a9:ab:e:51 ID:1,b6:dd:a9:ab:e:51 Lease:0x657364fb}
	I1208 10:50:11.673348    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:ce:75:11:10:e2:e0 ID:1,ce:75:11:10:e2:e0 Lease:0x6574b63c}
	I1208 10:50:11.673365    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:46:60:8d:22:4b ID:1,6:46:60:8d:22:4b Lease:0x657364c5}
	I1208 10:50:11.673387    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:4e:72:4c:b1:5e:d5 ID:1,4e:72:4c:b1:5e:d5 Lease:0x6574b62a}
	I1208 10:50:11.673399    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:7a:77:8e:23:b2:e ID:1,7a:77:8e:23:b2:e Lease:0x6574b51e}
	I1208 10:50:11.673415    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:86:e9:cf:16:9f:cb ID:1,86:e9:cf:16:9f:cb Lease:0x6574b4e8}
	I1208 10:50:11.673424    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:3a:ef:9c:ed:4:db ID:1,3a:ef:9c:ed:4:db Lease:0x6574b4cd}
	I1208 10:50:11.673432    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:fe:4d:d5:9f:24:c4 ID:1,fe:4d:d5:9f:24:c4 Lease:0x6574b4c0}
	I1208 10:50:11.673444    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:ba:cd:53:1f:22:ab ID:1,ba:cd:53:1f:22:ab Lease:0x6574b49e}
	I1208 10:50:11.673457    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ce:f5:b5:c4:fb:c8 ID:1,ce:f5:b5:c4:fb:c8 Lease:0x65736314}
	I1208 10:50:11.673468    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:be:e9:f9:29:2c:83 ID:1,be:e9:f9:29:2c:83 Lease:0x6574b462}
	I1208 10:50:11.673493    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:9a:b4:72:2b:86:59 ID:1,9a:b4:72:2b:86:59 Lease:0x6574b3f8}
	I1208 10:50:11.673508    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f2:b9:4e:4a:8c:a6 ID:1,f2:b9:4e:4a:8c:a6 Lease:0x6574b38b}
	I1208 10:50:11.673519    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:22:82:7b:b2:ac:da ID:1,22:82:7b:b2:ac:da Lease:0x6574b356}
	I1208 10:50:11.673537    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:47:4b:25:2a:46 ID:1,de:47:4b:25:2a:46 Lease:0x6574b23c}
	I1208 10:50:11.673546    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:e2:24:15:fe:eb:dd ID:1,e2:24:15:fe:eb:dd Lease:0x65736031}
	I1208 10:50:11.673555    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:c:96:6c:84:bf ID:1,fa:c:96:6c:84:bf Lease:0x6574b200}
	I1208 10:50:11.673565    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:d3:91:19:be:63 ID:1,86:d3:91:19:be:63 Lease:0x6574b1cc}
	I1208 10:50:11.673573    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:6a:dd:20:0:d1:28 ID:1,6a:dd:20:0:d1:28 Lease:0x65735ea2}
	I1208 10:50:11.673582    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fe:85:2:44:e3:c ID:1,fe:85:2:44:e3:c Lease:0x65735e8d}
	I1208 10:50:11.673589    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:da:9c:f9:88:b3:17 ID:1,da:9c:f9:88:b3:17 Lease:0x6574afd8}
	I1208 10:50:11.673597    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:5a:d6:c2:1e:af:27 ID:1,5a:d6:c2:1e:af:27 Lease:0x6574afb3}
	I1208 10:50:11.673607    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:6:95:a1:20:d1:95 ID:1,6:95:a1:20:d1:95 Lease:0x6574af76}
	I1208 10:50:11.673616    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:ce:da:98:ef:83 ID:1,6e:ce:da:98:ef:83 Lease:0x6574af04}
	I1208 10:50:11.673624    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e6:66:8f:7e:be:1b ID:1,e6:66:8f:7e:be:1b Lease:0x65735d6e}
	I1208 10:50:11.673633    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:a7:30:b6:e9:1e ID:1,1a:a7:30:b6:e9:1e Lease:0x6574ade8}
	I1208 10:50:11.673642    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:54:d0:4d:4d:3b ID:1,de:54:d0:4d:4d:3b Lease:0x6574adbc}
	I1208 10:50:11.673651    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:50:11.673661    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:50:11.679288    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1208 10:50:11.688411    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1208 10:50:11.689275    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1208 10:50:11.689296    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1208 10:50:11.689318    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1208 10:50:11.689339    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1208 10:50:12.078245    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1208 10:50:12.078262    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1208 10:50:12.182470    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1208 10:50:12.182490    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1208 10:50:12.182511    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1208 10:50:12.182531    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1208 10:50:12.183273    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1208 10:50:12.183286    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1208 10:50:13.673756    5593 main.go:141] libmachine: (calico-387000) DBG | Attempt 1
	I1208 10:50:13.673778    5593 main.go:141] libmachine: (calico-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:50:13.673878    5593 main.go:141] libmachine: (calico-387000) DBG | hyperkit pid from json: 5602
	I1208 10:50:13.674746    5593 main.go:141] libmachine: (calico-387000) DBG | Searching for 16:aa:5c:9:9d:be in /var/db/dhcpd_leases ...
	I1208 10:50:13.674821    5593 main.go:141] libmachine: (calico-387000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I1208 10:50:13.674839    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:d6:5e:3f:47:d:21 ID:1,d6:5e:3f:47:d:21 Lease:0x6574b6cc}
	I1208 10:50:13.674869    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:6a:df:ff:a:b0:c8 ID:1,6a:df:ff:a:b0:c8 Lease:0x6574b69c}
	I1208 10:50:13.674899    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:22:55:de:79:29:42 ID:1,22:55:de:79:29:42 Lease:0x65736541}
	I1208 10:50:13.674913    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:b6:dd:a9:ab:e:51 ID:1,b6:dd:a9:ab:e:51 Lease:0x657364fb}
	I1208 10:50:13.675000    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:ce:75:11:10:e2:e0 ID:1,ce:75:11:10:e2:e0 Lease:0x6574b63c}
	I1208 10:50:13.675019    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:46:60:8d:22:4b ID:1,6:46:60:8d:22:4b Lease:0x657364c5}
	I1208 10:50:13.675029    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:4e:72:4c:b1:5e:d5 ID:1,4e:72:4c:b1:5e:d5 Lease:0x6574b62a}
	I1208 10:50:13.675036    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:7a:77:8e:23:b2:e ID:1,7a:77:8e:23:b2:e Lease:0x6574b51e}
	I1208 10:50:13.675043    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:86:e9:cf:16:9f:cb ID:1,86:e9:cf:16:9f:cb Lease:0x6574b4e8}
	I1208 10:50:13.675052    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:3a:ef:9c:ed:4:db ID:1,3a:ef:9c:ed:4:db Lease:0x6574b4cd}
	I1208 10:50:13.675059    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:fe:4d:d5:9f:24:c4 ID:1,fe:4d:d5:9f:24:c4 Lease:0x6574b4c0}
	I1208 10:50:13.675068    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:ba:cd:53:1f:22:ab ID:1,ba:cd:53:1f:22:ab Lease:0x6574b49e}
	I1208 10:50:13.675075    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ce:f5:b5:c4:fb:c8 ID:1,ce:f5:b5:c4:fb:c8 Lease:0x65736314}
	I1208 10:50:13.675084    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:be:e9:f9:29:2c:83 ID:1,be:e9:f9:29:2c:83 Lease:0x6574b462}
	I1208 10:50:13.675092    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:9a:b4:72:2b:86:59 ID:1,9a:b4:72:2b:86:59 Lease:0x6574b3f8}
	I1208 10:50:13.675101    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f2:b9:4e:4a:8c:a6 ID:1,f2:b9:4e:4a:8c:a6 Lease:0x6574b38b}
	I1208 10:50:13.675110    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:22:82:7b:b2:ac:da ID:1,22:82:7b:b2:ac:da Lease:0x6574b356}
	I1208 10:50:13.675118    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:47:4b:25:2a:46 ID:1,de:47:4b:25:2a:46 Lease:0x6574b23c}
	I1208 10:50:13.675132    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:e2:24:15:fe:eb:dd ID:1,e2:24:15:fe:eb:dd Lease:0x65736031}
	I1208 10:50:13.675149    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:c:96:6c:84:bf ID:1,fa:c:96:6c:84:bf Lease:0x6574b200}
	I1208 10:50:13.675158    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:d3:91:19:be:63 ID:1,86:d3:91:19:be:63 Lease:0x6574b1cc}
	I1208 10:50:13.675167    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:6a:dd:20:0:d1:28 ID:1,6a:dd:20:0:d1:28 Lease:0x65735ea2}
	I1208 10:50:13.675177    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fe:85:2:44:e3:c ID:1,fe:85:2:44:e3:c Lease:0x65735e8d}
	I1208 10:50:13.675186    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:da:9c:f9:88:b3:17 ID:1,da:9c:f9:88:b3:17 Lease:0x6574afd8}
	I1208 10:50:13.675208    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:5a:d6:c2:1e:af:27 ID:1,5a:d6:c2:1e:af:27 Lease:0x6574afb3}
	I1208 10:50:13.675222    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:6:95:a1:20:d1:95 ID:1,6:95:a1:20:d1:95 Lease:0x6574af76}
	I1208 10:50:13.675236    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:ce:da:98:ef:83 ID:1,6e:ce:da:98:ef:83 Lease:0x6574af04}
	I1208 10:50:13.675247    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e6:66:8f:7e:be:1b ID:1,e6:66:8f:7e:be:1b Lease:0x65735d6e}
	I1208 10:50:13.675255    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:a7:30:b6:e9:1e ID:1,1a:a7:30:b6:e9:1e Lease:0x6574ade8}
	I1208 10:50:13.675264    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:54:d0:4d:4d:3b ID:1,de:54:d0:4d:4d:3b Lease:0x6574adbc}
	I1208 10:50:13.675283    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:50:13.675297    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:50:15.674960    5593 main.go:141] libmachine: (calico-387000) DBG | Attempt 2
	I1208 10:50:15.674979    5593 main.go:141] libmachine: (calico-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:50:15.675035    5593 main.go:141] libmachine: (calico-387000) DBG | hyperkit pid from json: 5602
	I1208 10:50:15.675848    5593 main.go:141] libmachine: (calico-387000) DBG | Searching for 16:aa:5c:9:9d:be in /var/db/dhcpd_leases ...
	I1208 10:50:15.675914    5593 main.go:141] libmachine: (calico-387000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I1208 10:50:15.675928    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:d6:5e:3f:47:d:21 ID:1,d6:5e:3f:47:d:21 Lease:0x6574b6cc}
	I1208 10:50:15.675947    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:6a:df:ff:a:b0:c8 ID:1,6a:df:ff:a:b0:c8 Lease:0x6574b69c}
	I1208 10:50:15.675956    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:22:55:de:79:29:42 ID:1,22:55:de:79:29:42 Lease:0x65736541}
	I1208 10:50:15.675967    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:b6:dd:a9:ab:e:51 ID:1,b6:dd:a9:ab:e:51 Lease:0x657364fb}
	I1208 10:50:15.675977    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:ce:75:11:10:e2:e0 ID:1,ce:75:11:10:e2:e0 Lease:0x6574b63c}
	I1208 10:50:15.675987    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:46:60:8d:22:4b ID:1,6:46:60:8d:22:4b Lease:0x657364c5}
	I1208 10:50:15.675998    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:4e:72:4c:b1:5e:d5 ID:1,4e:72:4c:b1:5e:d5 Lease:0x6574b62a}
	I1208 10:50:15.676020    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:7a:77:8e:23:b2:e ID:1,7a:77:8e:23:b2:e Lease:0x6574b51e}
	I1208 10:50:15.676042    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:86:e9:cf:16:9f:cb ID:1,86:e9:cf:16:9f:cb Lease:0x6574b4e8}
	I1208 10:50:15.676055    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:3a:ef:9c:ed:4:db ID:1,3a:ef:9c:ed:4:db Lease:0x6574b4cd}
	I1208 10:50:15.676072    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:fe:4d:d5:9f:24:c4 ID:1,fe:4d:d5:9f:24:c4 Lease:0x6574b4c0}
	I1208 10:50:15.676082    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:ba:cd:53:1f:22:ab ID:1,ba:cd:53:1f:22:ab Lease:0x6574b49e}
	I1208 10:50:15.676099    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ce:f5:b5:c4:fb:c8 ID:1,ce:f5:b5:c4:fb:c8 Lease:0x65736314}
	I1208 10:50:15.676108    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:be:e9:f9:29:2c:83 ID:1,be:e9:f9:29:2c:83 Lease:0x6574b462}
	I1208 10:50:15.676116    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:9a:b4:72:2b:86:59 ID:1,9a:b4:72:2b:86:59 Lease:0x6574b3f8}
	I1208 10:50:15.676124    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f2:b9:4e:4a:8c:a6 ID:1,f2:b9:4e:4a:8c:a6 Lease:0x6574b38b}
	I1208 10:50:15.676132    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:22:82:7b:b2:ac:da ID:1,22:82:7b:b2:ac:da Lease:0x6574b356}
	I1208 10:50:15.676148    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:47:4b:25:2a:46 ID:1,de:47:4b:25:2a:46 Lease:0x6574b23c}
	I1208 10:50:15.676155    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:e2:24:15:fe:eb:dd ID:1,e2:24:15:fe:eb:dd Lease:0x65736031}
	I1208 10:50:15.676163    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:c:96:6c:84:bf ID:1,fa:c:96:6c:84:bf Lease:0x6574b200}
	I1208 10:50:15.676169    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:d3:91:19:be:63 ID:1,86:d3:91:19:be:63 Lease:0x6574b1cc}
	I1208 10:50:15.676177    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:6a:dd:20:0:d1:28 ID:1,6a:dd:20:0:d1:28 Lease:0x65735ea2}
	I1208 10:50:15.676184    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fe:85:2:44:e3:c ID:1,fe:85:2:44:e3:c Lease:0x65735e8d}
	I1208 10:50:15.676193    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:da:9c:f9:88:b3:17 ID:1,da:9c:f9:88:b3:17 Lease:0x6574afd8}
	I1208 10:50:15.676200    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:5a:d6:c2:1e:af:27 ID:1,5a:d6:c2:1e:af:27 Lease:0x6574afb3}
	I1208 10:50:15.676211    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:6:95:a1:20:d1:95 ID:1,6:95:a1:20:d1:95 Lease:0x6574af76}
	I1208 10:50:15.676219    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:ce:da:98:ef:83 ID:1,6e:ce:da:98:ef:83 Lease:0x6574af04}
	I1208 10:50:15.676227    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e6:66:8f:7e:be:1b ID:1,e6:66:8f:7e:be:1b Lease:0x65735d6e}
	I1208 10:50:15.676235    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:a7:30:b6:e9:1e ID:1,1a:a7:30:b6:e9:1e Lease:0x6574ade8}
	I1208 10:50:15.676243    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:54:d0:4d:4d:3b ID:1,de:54:d0:4d:4d:3b Lease:0x6574adbc}
	I1208 10:50:15.676251    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:50:15.676260    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:50:17.208962    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1208 10:50:17.209021    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1208 10:50:17.209031    5593 main.go:141] libmachine: (calico-387000) DBG | 2023/12/08 10:50:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1208 10:50:17.677216    5593 main.go:141] libmachine: (calico-387000) DBG | Attempt 3
	I1208 10:50:17.677234    5593 main.go:141] libmachine: (calico-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:50:17.677283    5593 main.go:141] libmachine: (calico-387000) DBG | hyperkit pid from json: 5602
	I1208 10:50:17.678099    5593 main.go:141] libmachine: (calico-387000) DBG | Searching for 16:aa:5c:9:9d:be in /var/db/dhcpd_leases ...
	I1208 10:50:17.678181    5593 main.go:141] libmachine: (calico-387000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I1208 10:50:17.678194    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:d6:5e:3f:47:d:21 ID:1,d6:5e:3f:47:d:21 Lease:0x6574b6cc}
	I1208 10:50:17.678205    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:6a:df:ff:a:b0:c8 ID:1,6a:df:ff:a:b0:c8 Lease:0x6574b69c}
	I1208 10:50:17.678213    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:22:55:de:79:29:42 ID:1,22:55:de:79:29:42 Lease:0x65736541}
	I1208 10:50:17.678220    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:b6:dd:a9:ab:e:51 ID:1,b6:dd:a9:ab:e:51 Lease:0x657364fb}
	I1208 10:50:17.678227    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:ce:75:11:10:e2:e0 ID:1,ce:75:11:10:e2:e0 Lease:0x6574b63c}
	I1208 10:50:17.678234    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:46:60:8d:22:4b ID:1,6:46:60:8d:22:4b Lease:0x657364c5}
	I1208 10:50:17.678241    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:4e:72:4c:b1:5e:d5 ID:1,4e:72:4c:b1:5e:d5 Lease:0x6574b62a}
	I1208 10:50:17.678248    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:7a:77:8e:23:b2:e ID:1,7a:77:8e:23:b2:e Lease:0x6574b51e}
	I1208 10:50:17.678263    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:86:e9:cf:16:9f:cb ID:1,86:e9:cf:16:9f:cb Lease:0x6574b4e8}
	I1208 10:50:17.678277    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:3a:ef:9c:ed:4:db ID:1,3a:ef:9c:ed:4:db Lease:0x6574b4cd}
	I1208 10:50:17.678295    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:fe:4d:d5:9f:24:c4 ID:1,fe:4d:d5:9f:24:c4 Lease:0x6574b4c0}
	I1208 10:50:17.678305    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:ba:cd:53:1f:22:ab ID:1,ba:cd:53:1f:22:ab Lease:0x6574b49e}
	I1208 10:50:17.678312    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ce:f5:b5:c4:fb:c8 ID:1,ce:f5:b5:c4:fb:c8 Lease:0x65736314}
	I1208 10:50:17.678322    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:be:e9:f9:29:2c:83 ID:1,be:e9:f9:29:2c:83 Lease:0x6574b462}
	I1208 10:50:17.678331    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:9a:b4:72:2b:86:59 ID:1,9a:b4:72:2b:86:59 Lease:0x6574b3f8}
	I1208 10:50:17.678340    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f2:b9:4e:4a:8c:a6 ID:1,f2:b9:4e:4a:8c:a6 Lease:0x6574b38b}
	I1208 10:50:17.678348    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:22:82:7b:b2:ac:da ID:1,22:82:7b:b2:ac:da Lease:0x6574b356}
	I1208 10:50:17.678357    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:47:4b:25:2a:46 ID:1,de:47:4b:25:2a:46 Lease:0x6574b23c}
	I1208 10:50:17.678365    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:e2:24:15:fe:eb:dd ID:1,e2:24:15:fe:eb:dd Lease:0x65736031}
	I1208 10:50:17.678371    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:c:96:6c:84:bf ID:1,fa:c:96:6c:84:bf Lease:0x6574b200}
	I1208 10:50:17.678389    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:d3:91:19:be:63 ID:1,86:d3:91:19:be:63 Lease:0x6574b1cc}
	I1208 10:50:17.678403    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:6a:dd:20:0:d1:28 ID:1,6a:dd:20:0:d1:28 Lease:0x65735ea2}
	I1208 10:50:17.678425    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fe:85:2:44:e3:c ID:1,fe:85:2:44:e3:c Lease:0x65735e8d}
	I1208 10:50:17.678441    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:da:9c:f9:88:b3:17 ID:1,da:9c:f9:88:b3:17 Lease:0x6574afd8}
	I1208 10:50:17.678462    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:5a:d6:c2:1e:af:27 ID:1,5a:d6:c2:1e:af:27 Lease:0x6574afb3}
	I1208 10:50:17.678474    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:6:95:a1:20:d1:95 ID:1,6:95:a1:20:d1:95 Lease:0x6574af76}
	I1208 10:50:17.678490    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:ce:da:98:ef:83 ID:1,6e:ce:da:98:ef:83 Lease:0x6574af04}
	I1208 10:50:17.678502    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e6:66:8f:7e:be:1b ID:1,e6:66:8f:7e:be:1b Lease:0x65735d6e}
	I1208 10:50:17.678512    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:a7:30:b6:e9:1e ID:1,1a:a7:30:b6:e9:1e Lease:0x6574ade8}
	I1208 10:50:17.678520    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:54:d0:4d:4d:3b ID:1,de:54:d0:4d:4d:3b Lease:0x6574adbc}
	I1208 10:50:17.678528    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:50:17.678538    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:50:19.678835    5593 main.go:141] libmachine: (calico-387000) DBG | Attempt 4
	I1208 10:50:19.678854    5593 main.go:141] libmachine: (calico-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:50:19.678942    5593 main.go:141] libmachine: (calico-387000) DBG | hyperkit pid from json: 5602
	I1208 10:50:19.679865    5593 main.go:141] libmachine: (calico-387000) DBG | Searching for 16:aa:5c:9:9d:be in /var/db/dhcpd_leases ...
	I1208 10:50:19.679935    5593 main.go:141] libmachine: (calico-387000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I1208 10:50:19.679954    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:d6:5e:3f:47:d:21 ID:1,d6:5e:3f:47:d:21 Lease:0x6574b6cc}
	I1208 10:50:19.679977    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:6a:df:ff:a:b0:c8 ID:1,6a:df:ff:a:b0:c8 Lease:0x6574b69c}
	I1208 10:50:19.679989    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:22:55:de:79:29:42 ID:1,22:55:de:79:29:42 Lease:0x65736541}
	I1208 10:50:19.680002    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:b6:dd:a9:ab:e:51 ID:1,b6:dd:a9:ab:e:51 Lease:0x657364fb}
	I1208 10:50:19.680010    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:ce:75:11:10:e2:e0 ID:1,ce:75:11:10:e2:e0 Lease:0x6574b63c}
	I1208 10:50:19.680017    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:46:60:8d:22:4b ID:1,6:46:60:8d:22:4b Lease:0x657364c5}
	I1208 10:50:19.680024    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:4e:72:4c:b1:5e:d5 ID:1,4e:72:4c:b1:5e:d5 Lease:0x6574b62a}
	I1208 10:50:19.680033    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:7a:77:8e:23:b2:e ID:1,7a:77:8e:23:b2:e Lease:0x6574b51e}
	I1208 10:50:19.680041    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:86:e9:cf:16:9f:cb ID:1,86:e9:cf:16:9f:cb Lease:0x6574b4e8}
	I1208 10:50:19.680048    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:3a:ef:9c:ed:4:db ID:1,3a:ef:9c:ed:4:db Lease:0x6574b4cd}
	I1208 10:50:19.680055    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:fe:4d:d5:9f:24:c4 ID:1,fe:4d:d5:9f:24:c4 Lease:0x6574b4c0}
	I1208 10:50:19.680074    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:ba:cd:53:1f:22:ab ID:1,ba:cd:53:1f:22:ab Lease:0x6574b49e}
	I1208 10:50:19.680092    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ce:f5:b5:c4:fb:c8 ID:1,ce:f5:b5:c4:fb:c8 Lease:0x65736314}
	I1208 10:50:19.680102    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:be:e9:f9:29:2c:83 ID:1,be:e9:f9:29:2c:83 Lease:0x6574b462}
	I1208 10:50:19.680114    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:9a:b4:72:2b:86:59 ID:1,9a:b4:72:2b:86:59 Lease:0x6574b3f8}
	I1208 10:50:19.680122    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f2:b9:4e:4a:8c:a6 ID:1,f2:b9:4e:4a:8c:a6 Lease:0x6574b38b}
	I1208 10:50:19.680147    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:22:82:7b:b2:ac:da ID:1,22:82:7b:b2:ac:da Lease:0x6574b356}
	I1208 10:50:19.680162    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:de:47:4b:25:2a:46 ID:1,de:47:4b:25:2a:46 Lease:0x6574b23c}
	I1208 10:50:19.680174    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:e2:24:15:fe:eb:dd ID:1,e2:24:15:fe:eb:dd Lease:0x65736031}
	I1208 10:50:19.680188    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:c:96:6c:84:bf ID:1,fa:c:96:6c:84:bf Lease:0x6574b200}
	I1208 10:50:19.680198    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:d3:91:19:be:63 ID:1,86:d3:91:19:be:63 Lease:0x6574b1cc}
	I1208 10:50:19.680212    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:6a:dd:20:0:d1:28 ID:1,6a:dd:20:0:d1:28 Lease:0x65735ea2}
	I1208 10:50:19.680221    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:fe:85:2:44:e3:c ID:1,fe:85:2:44:e3:c Lease:0x65735e8d}
	I1208 10:50:19.680231    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:da:9c:f9:88:b3:17 ID:1,da:9c:f9:88:b3:17 Lease:0x6574afd8}
	I1208 10:50:19.680240    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:5a:d6:c2:1e:af:27 ID:1,5a:d6:c2:1e:af:27 Lease:0x6574afb3}
	I1208 10:50:19.680250    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:6:95:a1:20:d1:95 ID:1,6:95:a1:20:d1:95 Lease:0x6574af76}
	I1208 10:50:19.680258    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:ce:da:98:ef:83 ID:1,6e:ce:da:98:ef:83 Lease:0x6574af04}
	I1208 10:50:19.680272    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:e6:66:8f:7e:be:1b ID:1,e6:66:8f:7e:be:1b Lease:0x65735d6e}
	I1208 10:50:19.680280    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:a7:30:b6:e9:1e ID:1,1a:a7:30:b6:e9:1e Lease:0x6574ade8}
	I1208 10:50:19.680294    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:54:d0:4d:4d:3b ID:1,de:54:d0:4d:4d:3b Lease:0x6574adbc}
	I1208 10:50:19.680308    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:46:9f:cb:fd:ea:4f ID:1,46:9f:cb:fd:ea:4f Lease:0x6574ada8}
	I1208 10:50:19.680323    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x6574ad53}
	I1208 10:50:21.681520    5593 main.go:141] libmachine: (calico-387000) DBG | Attempt 5
	I1208 10:50:21.681549    5593 main.go:141] libmachine: (calico-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:50:21.681689    5593 main.go:141] libmachine: (calico-387000) DBG | hyperkit pid from json: 5602
	I1208 10:50:21.683201    5593 main.go:141] libmachine: (calico-387000) DBG | Searching for 16:aa:5c:9:9d:be in /var/db/dhcpd_leases ...
	I1208 10:50:21.683346    5593 main.go:141] libmachine: (calico-387000) DBG | Found 33 entries in /var/db/dhcpd_leases!
	I1208 10:50:21.683371    5593 main.go:141] libmachine: (calico-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:16:aa:5c:9:9d:be ID:1,16:aa:5c:9:9d:be Lease:0x6574b6ec}
	I1208 10:50:21.683422    5593 main.go:141] libmachine: (calico-387000) DBG | Found match: 16:aa:5c:9:9d:be
	I1208 10:50:21.683441    5593 main.go:141] libmachine: (calico-387000) DBG | IP: 192.169.0.34
	I1208 10:50:21.683444    5593 main.go:141] libmachine: (calico-387000) Calling .GetConfigRaw
	I1208 10:50:21.684219    5593 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I1208 10:50:21.684380    5593 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I1208 10:50:21.684517    5593 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1208 10:50:21.684538    5593 main.go:141] libmachine: (calico-387000) Calling .GetState
	I1208 10:50:21.684645    5593 main.go:141] libmachine: (calico-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:50:21.684717    5593 main.go:141] libmachine: (calico-387000) DBG | hyperkit pid from json: 5602
	I1208 10:50:21.685748    5593 main.go:141] libmachine: Detecting operating system of created instance...
	I1208 10:50:21.685760    5593 main.go:141] libmachine: Waiting for SSH to be available...
	I1208 10:50:21.685765    5593 main.go:141] libmachine: Getting to WaitForSSH function...
	I1208 10:50:21.685771    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:21.685850    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:21.685943    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:21.686028    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:21.686107    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:21.686208    5593 main.go:141] libmachine: Using SSH client type: native
	I1208 10:50:21.686503    5593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I1208 10:50:21.686510    5593 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1208 10:50:21.747223    5593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 10:50:21.747237    5593 main.go:141] libmachine: Detecting the provisioner...
	I1208 10:50:21.747243    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:21.747380    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:21.747463    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:21.747559    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:21.747647    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:21.747800    5593 main.go:141] libmachine: Using SSH client type: native
	I1208 10:50:21.748067    5593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I1208 10:50:21.748076    5593 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1208 10:50:21.809750    5593 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1208 10:50:21.809802    5593 main.go:141] libmachine: found compatible host: buildroot
	I1208 10:50:21.809809    5593 main.go:141] libmachine: Provisioning with buildroot...
	I1208 10:50:21.809815    5593 main.go:141] libmachine: (calico-387000) Calling .GetMachineName
	I1208 10:50:21.809948    5593 buildroot.go:166] provisioning hostname "calico-387000"
	I1208 10:50:21.809960    5593 main.go:141] libmachine: (calico-387000) Calling .GetMachineName
	I1208 10:50:21.810073    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:21.810154    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:21.810252    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:21.810338    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:21.810422    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:21.810551    5593 main.go:141] libmachine: Using SSH client type: native
	I1208 10:50:21.810796    5593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I1208 10:50:21.810805    5593 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-387000 && echo "calico-387000" | sudo tee /etc/hostname
	I1208 10:50:21.880940    5593 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-387000
	
	I1208 10:50:21.880960    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:21.881086    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:21.881206    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:21.881286    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:21.881383    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:21.881524    5593 main.go:141] libmachine: Using SSH client type: native
	I1208 10:50:21.881803    5593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I1208 10:50:21.881816    5593 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-387000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-387000/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-387000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 10:50:21.949534    5593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 10:50:21.949555    5593 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17738-1113/.minikube CaCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17738-1113/.minikube}
	I1208 10:50:21.949566    5593 buildroot.go:174] setting up certificates
	I1208 10:50:21.949579    5593 provision.go:83] configureAuth start
	I1208 10:50:21.949586    5593 main.go:141] libmachine: (calico-387000) Calling .GetMachineName
	I1208 10:50:21.949715    5593 main.go:141] libmachine: (calico-387000) Calling .GetIP
	I1208 10:50:21.949817    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:21.949910    5593 provision.go:138] copyHostCerts
	I1208 10:50:21.949983    5593 exec_runner.go:144] found /Users/jenkins/minikube-integration/17738-1113/.minikube/ca.pem, removing ...
	I1208 10:50:21.949993    5593 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17738-1113/.minikube/ca.pem
	I1208 10:50:21.950150    5593 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/ca.pem (1078 bytes)
	I1208 10:50:21.950406    5593 exec_runner.go:144] found /Users/jenkins/minikube-integration/17738-1113/.minikube/cert.pem, removing ...
	I1208 10:50:21.950413    5593 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17738-1113/.minikube/cert.pem
	I1208 10:50:21.950489    5593 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/cert.pem (1123 bytes)
	I1208 10:50:21.950666    5593 exec_runner.go:144] found /Users/jenkins/minikube-integration/17738-1113/.minikube/key.pem, removing ...
	I1208 10:50:21.950672    5593 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17738-1113/.minikube/key.pem
	I1208 10:50:21.950748    5593 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17738-1113/.minikube/key.pem (1679 bytes)
	I1208 10:50:21.950891    5593 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca-key.pem org=jenkins.calico-387000 san=[192.169.0.34 192.169.0.34 localhost 127.0.0.1 minikube calico-387000]
	I1208 10:50:22.037490    5593 provision.go:172] copyRemoteCerts
	I1208 10:50:22.037557    5593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 10:50:22.037574    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:22.037721    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:22.037804    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.037922    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:22.038009    5593 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/id_rsa Username:docker}
	I1208 10:50:22.074275    5593 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 10:50:22.090234    5593 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1208 10:50:22.105965    5593 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 10:50:22.122470    5593 provision.go:86] duration metric: configureAuth took 172.879163ms
	I1208 10:50:22.122483    5593 buildroot.go:189] setting minikube options for container-runtime
	I1208 10:50:22.122612    5593 config.go:182] Loaded profile config "calico-387000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1208 10:50:22.122626    5593 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I1208 10:50:22.122752    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:22.122845    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:22.122945    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.123029    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.123102    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:22.123212    5593 main.go:141] libmachine: Using SSH client type: native
	I1208 10:50:22.123450    5593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I1208 10:50:22.123459    5593 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1208 10:50:22.184962    5593 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1208 10:50:22.184976    5593 buildroot.go:70] root file system type: tmpfs
	I1208 10:50:22.185049    5593 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1208 10:50:22.185068    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:22.185191    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:22.185279    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.185374    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.185455    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:22.185574    5593 main.go:141] libmachine: Using SSH client type: native
	I1208 10:50:22.185820    5593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I1208 10:50:22.185870    5593 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1208 10:50:22.257105    5593 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1208 10:50:22.257128    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:22.257268    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:22.257360    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.257457    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.257539    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:22.257666    5593 main.go:141] libmachine: Using SSH client type: native
	I1208 10:50:22.257929    5593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I1208 10:50:22.257943    5593 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1208 10:50:22.761876    5593 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1208 10:50:22.761912    5593 main.go:141] libmachine: Checking connection to Docker...
	I1208 10:50:22.761924    5593 main.go:141] libmachine: (calico-387000) Calling .GetURL
	I1208 10:50:22.762073    5593 main.go:141] libmachine: Docker is up and running!
	I1208 10:50:22.762081    5593 main.go:141] libmachine: Reticulating splines...
	I1208 10:50:22.762086    5593 client.go:171] LocalClient.Create took 11.852248361s
	I1208 10:50:22.762096    5593 start.go:167] duration metric: libmachine.API.Create for "calico-387000" took 11.852292172s
	I1208 10:50:22.762105    5593 start.go:300] post-start starting for "calico-387000" (driver="hyperkit")
	I1208 10:50:22.762118    5593 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 10:50:22.762128    5593 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I1208 10:50:22.762299    5593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 10:50:22.762312    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:22.762410    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:22.762536    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.762662    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:22.762760    5593 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/id_rsa Username:docker}
	I1208 10:50:22.799140    5593 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 10:50:22.801898    5593 info.go:137] Remote host: Buildroot 2021.02.12
	I1208 10:50:22.801916    5593 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17738-1113/.minikube/addons for local assets ...
	I1208 10:50:22.802018    5593 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17738-1113/.minikube/files for local assets ...
	I1208 10:50:22.802193    5593 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17738-1113/.minikube/files/etc/ssl/certs/15852.pem -> 15852.pem in /etc/ssl/certs
	I1208 10:50:22.802395    5593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 10:50:22.808392    5593 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17738-1113/.minikube/files/etc/ssl/certs/15852.pem --> /etc/ssl/certs/15852.pem (1708 bytes)
	I1208 10:50:22.825759    5593 start.go:303] post-start completed in 63.645722ms
	I1208 10:50:22.825786    5593 main.go:141] libmachine: (calico-387000) Calling .GetConfigRaw
	I1208 10:50:22.826371    5593 main.go:141] libmachine: (calico-387000) Calling .GetIP
	I1208 10:50:22.826514    5593 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/calico-387000/config.json ...
	I1208 10:50:22.826843    5593 start.go:128] duration metric: createHost completed in 11.989581135s
	I1208 10:50:22.826859    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:22.826951    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:22.827041    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.827111    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.827186    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:22.827290    5593 main.go:141] libmachine: Using SSH client type: native
	I1208 10:50:22.827530    5593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406620] 0x1409300 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I1208 10:50:22.827538    5593 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1208 10:50:22.889365    5593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702061422.832332548
	
	I1208 10:50:22.889378    5593 fix.go:206] guest clock: 1702061422.832332548
	I1208 10:50:22.889384    5593 fix.go:219] Guest: 2023-12-08 10:50:22.832332548 -0800 PST Remote: 2023-12-08 10:50:22.826853 -0800 PST m=+12.530885950 (delta=5.479548ms)
	I1208 10:50:22.889401    5593 fix.go:190] guest clock delta is within tolerance: 5.479548ms
	I1208 10:50:22.889411    5593 start.go:83] releasing machines lock for "calico-387000", held for 12.052248451s
	I1208 10:50:22.889433    5593 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I1208 10:50:22.889561    5593 main.go:141] libmachine: (calico-387000) Calling .GetIP
	I1208 10:50:22.889636    5593 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I1208 10:50:22.889919    5593 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I1208 10:50:22.890019    5593 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I1208 10:50:22.890093    5593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 10:50:22.890120    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:22.890146    5593 ssh_runner.go:195] Run: cat /version.json
	I1208 10:50:22.890157    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I1208 10:50:22.890195    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:22.890262    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I1208 10:50:22.890292    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.890344    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I1208 10:50:22.890368    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:22.890444    5593 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I1208 10:50:22.890459    5593 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/id_rsa Username:docker}
	I1208 10:50:22.890518    5593 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/calico-387000/id_rsa Username:docker}
	I1208 10:50:22.923469    5593 ssh_runner.go:195] Run: systemctl --version
	I1208 10:50:22.927566    5593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 10:50:22.979447    5593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 10:50:22.979519    5593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 10:50:22.990500    5593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1208 10:50:22.990515    5593 start.go:475] detecting cgroup driver to use...
	I1208 10:50:22.990613    5593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 10:50:23.004319    5593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1208 10:50:23.011567    5593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 10:50:23.018623    5593 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 10:50:23.018668    5593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 10:50:23.025820    5593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 10:50:23.034000    5593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 10:50:23.041280    5593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 10:50:23.048295    5593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 10:50:23.055503    5593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 10:50:23.062553    5593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 10:50:23.069508    5593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 10:50:23.075862    5593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:50:23.169002    5593 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 10:50:23.181980    5593 start.go:475] detecting cgroup driver to use...
	I1208 10:50:23.182055    5593 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1208 10:50:23.193678    5593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 10:50:23.203892    5593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 10:50:23.215396    5593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 10:50:23.225158    5593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 10:50:23.233888    5593 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 10:50:23.306296    5593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 10:50:23.315744    5593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 10:50:23.329085    5593 ssh_runner.go:195] Run: which cri-dockerd
	I1208 10:50:23.331683    5593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1208 10:50:23.338293    5593 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1208 10:50:23.349794    5593 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1208 10:50:23.436643    5593 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1208 10:50:23.535436    5593 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1208 10:50:23.535519    5593 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1208 10:50:23.547082    5593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:50:23.630725    5593 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1208 10:50:25.056562    5593 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.425836558s)
	I1208 10:50:25.056635    5593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1208 10:50:25.147940    5593 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1208 10:50:25.232744    5593 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1208 10:50:25.336729    5593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 10:50:25.431299    5593 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1208 10:50:25.442876    5593 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1208 10:50:25.524592    5593 out.go:177] 
	W1208 10:50:25.547977    5593 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Fri 2023-12-08 18:50:19 UTC, ends at Fri 2023-12-08 18:50:25 UTC. --
	Dec 08 18:50:20 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 08 18:50:20 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 08 18:50:22 calico-387000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 08 18:50:22 calico-387000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 08 18:50:22 calico-387000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 08 18:50:22 calico-387000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 08 18:50:22 calico-387000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 08 18:50:25 calico-387000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 08 18:50:25 calico-387000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 08 18:50:25 calico-387000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 08 18:50:25 calico-387000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 08 18:50:25 calico-387000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1208 10:50:25.547997    5593 out.go:239] * 
	W1208 10:50:25.548734    5593 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 10:50:25.631802    5593 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/calico/Start (15.42s)
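
The journal output above shows why the restart failed: systemd refuses to (re)listen on a socket unit while its paired service is still active ("cri-docker.socket: Socket service cri-docker.service already active, refusing."). As a minimal sketch only (not part of the test run, and assuming a standard systemd host with the cri-dockerd units installed), the socket can be restarted manually by stopping the service first:

```shell
# systemd will not restart a socket unit whose paired service is active,
# so stop the service before touching the socket.
sudo systemctl stop cri-docker.service
sudo systemctl restart cri-docker.socket

# Bring the service back up and confirm both units are healthy.
sudo systemctl start cri-docker.service
systemctl is-active cri-docker.socket cri-docker.service
```

This is a diagnostic workaround sketch, not what minikube does internally; the test itself simply surfaces the `systemctl restart cri-docker.socket` failure as exit status 90 (RUNTIME_ENABLE).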

                                                
                                    

Test pass (284/310)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 16.19
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.31
10 TestDownloadOnly/v1.28.4/json-events 12.62
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.32
17 TestDownloadOnly/v1.29.0-rc.1/json-events 27.42
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.1/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.29
23 TestDownloadOnly/DeleteAll 0.39
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.37
26 TestBinaryMirror 1.01
27 TestOffline 94.24
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
33 TestCertOptions 40.45
34 TestCertExpiration 252.23
35 TestDockerFlags 40.33
36 TestForceSystemdFlag 40.53
37 TestForceSystemdEnv 37.38
40 TestHyperKitDriverInstallOrUpdate 7
43 TestErrorSpam/setup 33.83
44 TestErrorSpam/start 1.54
45 TestErrorSpam/status 0.49
46 TestErrorSpam/pause 1.29
47 TestErrorSpam/unpause 1.29
48 TestErrorSpam/stop 5.67
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 52.56
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 38.14
55 TestFunctional/serial/KubeContext 0.04
56 TestFunctional/serial/KubectlGetPods 0.06
59 TestFunctional/serial/CacheCmd/cache/add_remote 3.16
60 TestFunctional/serial/CacheCmd/cache/add_local 1.45
61 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
62 TestFunctional/serial/CacheCmd/cache/list 0.08
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
64 TestFunctional/serial/CacheCmd/cache/cache_reload 1.07
65 TestFunctional/serial/CacheCmd/cache/delete 0.16
66 TestFunctional/serial/MinikubeKubectlCmd 0.53
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.77
68 TestFunctional/serial/ExtraConfig 40.75
69 TestFunctional/serial/ComponentHealth 0.05
70 TestFunctional/serial/LogsCmd 2.62
71 TestFunctional/serial/LogsFileCmd 2.7
72 TestFunctional/serial/InvalidService 4.12
74 TestFunctional/parallel/ConfigCmd 0.5
75 TestFunctional/parallel/DashboardCmd 14.04
76 TestFunctional/parallel/DryRun 0.94
77 TestFunctional/parallel/InternationalLanguage 0.46
78 TestFunctional/parallel/StatusCmd 0.49
82 TestFunctional/parallel/ServiceCmdConnect 7.56
83 TestFunctional/parallel/AddonsCmd 0.26
84 TestFunctional/parallel/PersistentVolumeClaim 28.45
86 TestFunctional/parallel/SSHCmd 0.29
87 TestFunctional/parallel/CpCmd 0.61
88 TestFunctional/parallel/MySQL 25.88
89 TestFunctional/parallel/FileSync 0.23
90 TestFunctional/parallel/CertSync 1.18
94 TestFunctional/parallel/NodeLabels 0.06
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
98 TestFunctional/parallel/License 0.58
99 TestFunctional/parallel/Version/short 0.1
100 TestFunctional/parallel/Version/components 0.66
101 TestFunctional/parallel/ImageCommands/ImageListShort 0.16
102 TestFunctional/parallel/ImageCommands/ImageListTable 0.17
103 TestFunctional/parallel/ImageCommands/ImageListJson 0.15
104 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
105 TestFunctional/parallel/ImageCommands/ImageBuild 2.34
106 TestFunctional/parallel/ImageCommands/Setup 2.57
107 TestFunctional/parallel/DockerEnv/bash 0.74
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.42
112 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.09
113 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.39
114 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.09
115 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
116 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.31
117 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.3
118 TestFunctional/parallel/ServiceCmd/DeployApp 13.13
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.38
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.17
124 TestFunctional/parallel/ServiceCmd/List 0.37
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
127 TestFunctional/parallel/ServiceCmd/Format 0.26
128 TestFunctional/parallel/ServiceCmd/URL 0.25
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
136 TestFunctional/parallel/ProfileCmd/profile_list 0.28
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
138 TestFunctional/parallel/MountCmd/any-port 6.1
139 TestFunctional/parallel/MountCmd/specific-port 1.42
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.42
141 TestFunctional/delete_addon-resizer_images 0.12
142 TestFunctional/delete_my-image_image 0.05
143 TestFunctional/delete_minikube_cached_images 0.05
147 TestImageBuild/serial/Setup 37.99
148 TestImageBuild/serial/NormalBuild 1.34
149 TestImageBuild/serial/BuildWithBuildArg 0.74
150 TestImageBuild/serial/BuildWithDockerIgnore 0.24
151 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.22
154 TestIngressAddonLegacy/StartLegacyK8sCluster 73.5
156 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.84
157 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
158 TestIngressAddonLegacy/serial/ValidateIngressAddons 30.79
161 TestJSONOutput/start/Command 50.11
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.47
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.43
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 8.16
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.8
189 TestMainNoArgs 0.08
193 TestMountStart/serial/StartWithMountFirst 16.11
194 TestMountStart/serial/VerifyMountFirst 0.32
195 TestMountStart/serial/StartWithMountSecond 16.41
196 TestMountStart/serial/VerifyMountSecond 0.31
197 TestMountStart/serial/DeleteFirst 2.35
198 TestMountStart/serial/VerifyMountPostDelete 0.31
199 TestMountStart/serial/Stop 2.23
200 TestMountStart/serial/RestartStopped 16.43
201 TestMountStart/serial/VerifyMountPostStop 0.31
204 TestMultiNode/serial/FreshStart2Nodes 160.41
205 TestMultiNode/serial/DeployApp2Nodes 4.44
206 TestMultiNode/serial/PingHostFrom2Pods 0.91
207 TestMultiNode/serial/AddNode 32.64
208 TestMultiNode/serial/MultiNodeLabels 0.05
209 TestMultiNode/serial/ProfileList 0.2
210 TestMultiNode/serial/CopyFile 5.33
211 TestMultiNode/serial/StopNode 2.72
212 TestMultiNode/serial/StartAfterStop 27.17
213 TestMultiNode/serial/RestartKeepsNodes 161.7
214 TestMultiNode/serial/DeleteNode 2.96
215 TestMultiNode/serial/StopMultiNode 16.47
216 TestMultiNode/serial/RestartMultiNode 109.91
221 TestPreload 150.73
223 TestScheduledStopUnix 105.56
224 TestSkaffold 108.99
227 TestRunningBinaryUpgrade 171.96
229 TestKubernetesUpgrade 147.24
242 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.52
243 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.86
244 TestStoppedBinaryUpgrade/Setup 1
246 TestStoppedBinaryUpgrade/MinikubeLogs 3.37
248 TestPause/serial/Start 49.39
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.52
258 TestNoKubernetes/serial/StartWithK8s 37.22
259 TestPause/serial/SecondStartNoReconfiguration 38.46
260 TestNoKubernetes/serial/StartWithStopK8s 16.59
261 TestNoKubernetes/serial/Start 18.08
262 TestPause/serial/Pause 0.54
263 TestPause/serial/VerifyStatus 0.16
264 TestPause/serial/Unpause 0.57
265 TestPause/serial/PauseAgain 0.57
266 TestPause/serial/DeletePaused 5.27
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
268 TestNoKubernetes/serial/ProfileList 29.82
269 TestPause/serial/VerifyDeletedResources 0.21
270 TestNetworkPlugins/group/auto/Start 48.98
271 TestNoKubernetes/serial/Stop 2.25
272 TestNoKubernetes/serial/StartNoArgs 17.15
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
274 TestNetworkPlugins/group/kindnet/Start 58.98
275 TestNetworkPlugins/group/auto/KubeletFlags 0.18
276 TestNetworkPlugins/group/auto/NetCatPod 12.18
277 TestNetworkPlugins/group/auto/DNS 0.13
278 TestNetworkPlugins/group/auto/Localhost 0.11
279 TestNetworkPlugins/group/auto/HairPin 0.1
281 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
282 TestNetworkPlugins/group/custom-flannel/Start 58.76
283 TestNetworkPlugins/group/kindnet/KubeletFlags 0.17
284 TestNetworkPlugins/group/kindnet/NetCatPod 11.19
285 TestNetworkPlugins/group/kindnet/DNS 0.13
286 TestNetworkPlugins/group/kindnet/Localhost 0.11
287 TestNetworkPlugins/group/kindnet/HairPin 0.1
288 TestNetworkPlugins/group/false/Start 59.74
289 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.17
290 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.18
291 TestNetworkPlugins/group/custom-flannel/DNS 0.13
292 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
293 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
294 TestNetworkPlugins/group/enable-default-cni/Start 53.25
295 TestNetworkPlugins/group/false/KubeletFlags 0.16
296 TestNetworkPlugins/group/false/NetCatPod 11.17
297 TestNetworkPlugins/group/false/DNS 0.14
298 TestNetworkPlugins/group/false/Localhost 0.11
299 TestNetworkPlugins/group/false/HairPin 0.11
300 TestNetworkPlugins/group/flannel/Start 58.45
301 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.16
302 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.17
303 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
304 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
305 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
306 TestNetworkPlugins/group/bridge/Start 50.73
307 TestNetworkPlugins/group/flannel/ControllerPod 5.01
308 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
309 TestNetworkPlugins/group/flannel/NetCatPod 12.17
310 TestNetworkPlugins/group/flannel/DNS 0.12
311 TestNetworkPlugins/group/flannel/Localhost 0.1
312 TestNetworkPlugins/group/flannel/HairPin 0.11
313 TestNetworkPlugins/group/kubenet/Start 49.28
314 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
315 TestNetworkPlugins/group/bridge/NetCatPod 11.17
316 TestNetworkPlugins/group/bridge/DNS 0.15
317 TestNetworkPlugins/group/bridge/Localhost 0.11
318 TestNetworkPlugins/group/bridge/HairPin 0.11
320 TestStartStop/group/old-k8s-version/serial/FirstStart 139.75
321 TestNetworkPlugins/group/kubenet/KubeletFlags 0.16
322 TestNetworkPlugins/group/kubenet/NetCatPod 11.23
323 TestNetworkPlugins/group/kubenet/DNS 32.85
324 TestNetworkPlugins/group/kubenet/Localhost 0.1
325 TestNetworkPlugins/group/kubenet/HairPin 0.1
327 TestStartStop/group/no-preload/serial/FirstStart 87.42
328 TestStartStop/group/old-k8s-version/serial/DeployApp 8.28
329 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.67
330 TestStartStop/group/old-k8s-version/serial/Stop 8.28
331 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.31
332 TestStartStop/group/old-k8s-version/serial/SecondStart 490.42
333 TestStartStop/group/no-preload/serial/DeployApp 9.57
334 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
335 TestStartStop/group/no-preload/serial/Stop 8.29
336 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
337 TestStartStop/group/no-preload/serial/SecondStart 297.83
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
339 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
341 TestStartStop/group/no-preload/serial/Pause 1.96
343 TestStartStop/group/embed-certs/serial/FirstStart 60.68
344 TestStartStop/group/embed-certs/serial/DeployApp 8.26
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.79
346 TestStartStop/group/embed-certs/serial/Stop 8.25
347 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
348 TestStartStop/group/embed-certs/serial/SecondStart 300.16
349 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
350 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
351 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.17
352 TestStartStop/group/old-k8s-version/serial/Pause 1.78
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.84
355 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.24
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
357 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.26
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 294.24
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
362 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
363 TestStartStop/group/embed-certs/serial/Pause 1.93
365 TestStartStop/group/newest-cni/serial/FirstStart 48.13
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
368 TestStartStop/group/newest-cni/serial/Stop 8.31
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
370 TestStartStop/group/newest-cni/serial/SecondStart 37.52
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.17
374 TestStartStop/group/newest-cni/serial/Pause 1.78
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.16
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.79
TestDownloadOnly/v1.16.0/json-events (16.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-396000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-396000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (16.193512312s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.19s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-396000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-396000: exit status 85 (312.465822ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-396000 | jenkins | v1.32.0 | 08 Dec 23 10:09 PST |          |
	|         | -p download-only-396000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 10:09:38
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 10:09:38.347656    1587 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:09:38.347959    1587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:09:38.347965    1587 out.go:309] Setting ErrFile to fd 2...
	I1208 10:09:38.347969    1587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:09:38.348143    1587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	W1208 10:09:38.348240    1587 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17738-1113/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17738-1113/.minikube/config/config.json: no such file or directory
	I1208 10:09:38.349918    1587 out.go:303] Setting JSON to true
	I1208 10:09:38.373588    1587 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":534,"bootTime":1702058444,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1208 10:09:38.373700    1587 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1208 10:09:38.395402    1587 out.go:97] [download-only-396000] minikube v1.32.0 on Darwin 14.1.2
	I1208 10:09:38.417073    1587 out.go:169] MINIKUBE_LOCATION=17738
	W1208 10:09:38.395609    1587 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball: no such file or directory
	I1208 10:09:38.395661    1587 notify.go:220] Checking for updates...
	I1208 10:09:38.461117    1587 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	I1208 10:09:38.482980    1587 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1208 10:09:38.504095    1587 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 10:09:38.525080    1587 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	W1208 10:09:38.566972    1587 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 10:09:38.567274    1587 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 10:09:38.662271    1587 out.go:97] Using the hyperkit driver based on user configuration
	I1208 10:09:38.662322    1587 start.go:298] selected driver: hyperkit
	I1208 10:09:38.662332    1587 start.go:902] validating driver "hyperkit" against <nil>
	I1208 10:09:38.662523    1587 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:09:38.662796    1587 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17738-1113/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1208 10:09:38.804374    1587 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1208 10:09:38.808671    1587 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:09:38.808694    1587 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1208 10:09:38.808721    1587 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1208 10:09:38.813052    1587 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1208 10:09:38.813226    1587 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 10:09:38.813259    1587 cni.go:84] Creating CNI manager for ""
	I1208 10:09:38.813271    1587 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1208 10:09:38.813278    1587 start_flags.go:323] config:
	{Name:download-only-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-396000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 10:09:38.813544    1587 iso.go:125] acquiring lock: {Name:mk933f5286cca8a935e46c54218c5cced15285e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:09:38.841281    1587 out.go:97] Downloading VM boot image ...
	I1208 10:09:38.841392    1587 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1208 10:09:45.871332    1587 out.go:97] Starting control plane node download-only-396000 in cluster download-only-396000
	I1208 10:09:45.871371    1587 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1208 10:09:45.922931    1587 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1208 10:09:45.922962    1587 cache.go:56] Caching tarball of preloaded images
	I1208 10:09:45.923301    1587 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1208 10:09:45.946973    1587 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1208 10:09:45.947000    1587 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1208 10:09:46.026751    1587 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1208 10:09:51.316472    1587 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1208 10:09:51.316663    1587 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1208 10:09:51.920040    1587 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1208 10:09:51.920268    1587 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/download-only-396000/config.json ...
	I1208 10:09:51.920291    1587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/download-only-396000/config.json: {Name:mk1c42afab8d07d4eaf84764d66fec9699e5fe08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 10:09:51.920591    1587 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1208 10:09:51.920875    1587 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-396000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

TestDownloadOnly/v1.28.4/json-events (12.62s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-396000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-396000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperkit : (12.614918119s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (12.62s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-396000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-396000: exit status 85 (315.644663ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-396000 | jenkins | v1.32.0 | 08 Dec 23 10:09 PST |          |
	|         | -p download-only-396000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-396000 | jenkins | v1.32.0 | 08 Dec 23 10:09 PST |          |
	|         | -p download-only-396000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 10:09:54
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 10:09:54.857621    1606 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:09:54.857926    1606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:09:54.857932    1606 out.go:309] Setting ErrFile to fd 2...
	I1208 10:09:54.857937    1606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:09:54.858123    1606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	W1208 10:09:54.858225    1606 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17738-1113/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17738-1113/.minikube/config/config.json: no such file or directory
	I1208 10:09:54.859621    1606 out.go:303] Setting JSON to true
	I1208 10:09:54.882052    1606 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":550,"bootTime":1702058444,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1208 10:09:54.882162    1606 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1208 10:09:54.903749    1606 out.go:97] [download-only-396000] minikube v1.32.0 on Darwin 14.1.2
	I1208 10:09:54.924768    1606 out.go:169] MINIKUBE_LOCATION=17738
	I1208 10:09:54.903867    1606 notify.go:220] Checking for updates...
	I1208 10:09:54.966606    1606 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	I1208 10:09:54.987833    1606 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1208 10:09:55.011138    1606 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 10:09:55.032766    1606 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	W1208 10:09:55.076895    1606 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 10:09:55.077642    1606 config.go:182] Loaded profile config "download-only-396000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1208 10:09:55.077726    1606 start.go:810] api.Load failed for download-only-396000: filestore "download-only-396000": Docker machine "download-only-396000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1208 10:09:55.077893    1606 driver.go:392] Setting default libvirt URI to qemu:///system
	W1208 10:09:55.077934    1606 start.go:810] api.Load failed for download-only-396000: filestore "download-only-396000": Docker machine "download-only-396000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1208 10:09:55.107665    1606 out.go:97] Using the hyperkit driver based on existing profile
	I1208 10:09:55.107705    1606 start.go:298] selected driver: hyperkit
	I1208 10:09:55.107713    1606 start.go:902] validating driver "hyperkit" against &{Name:download-only-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-396000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 10:09:55.107999    1606 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:09:55.108137    1606 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17738-1113/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1208 10:09:55.116352    1606 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1208 10:09:55.120156    1606 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:09:55.120188    1606 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1208 10:09:55.122908    1606 cni.go:84] Creating CNI manager for ""
	I1208 10:09:55.122929    1606 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1208 10:09:55.122944    1606 start_flags.go:323] config:
	{Name:download-only-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-396000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 10:09:55.123089    1606 iso.go:125] acquiring lock: {Name:mk933f5286cca8a935e46c54218c5cced15285e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:09:55.144743    1606 out.go:97] Starting control plane node download-only-396000 in cluster download-only-396000
	I1208 10:09:55.144779    1606 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1208 10:09:55.199917    1606 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1208 10:09:55.199948    1606 cache.go:56] Caching tarball of preloaded images
	I1208 10:09:55.200323    1606 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1208 10:09:55.221660    1606 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1208 10:09:55.221688    1606 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1208 10:09:55.297893    1606 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1208 10:10:02.363452    1606 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1208 10:10:02.363654    1606 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1208 10:10:02.984982    1606 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1208 10:10:02.985059    1606 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/download-only-396000/config.json ...
	I1208 10:10:02.985465    1606 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1208 10:10:02.985692    1606 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-396000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.32s)

TestDownloadOnly/v1.29.0-rc.1/json-events (27.42s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-396000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-396000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=docker --driver=hyperkit : (27.42359525s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (27.42s)

TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-396000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-396000: exit status 85 (290.36478ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-396000 | jenkins | v1.32.0 | 08 Dec 23 10:09 PST |          |
	|         | -p download-only-396000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-396000 | jenkins | v1.32.0 | 08 Dec 23 10:09 PST |          |
	|         | -p download-only-396000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-396000 | jenkins | v1.32.0 | 08 Dec 23 10:10 PST |          |
	|         | -p download-only-396000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 10:10:07
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 10:10:07.791211    1619 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:10:07.791501    1619 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:10:07.791507    1619 out.go:309] Setting ErrFile to fd 2...
	I1208 10:10:07.791511    1619 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:10:07.791699    1619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	W1208 10:10:07.791792    1619 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17738-1113/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17738-1113/.minikube/config/config.json: no such file or directory
	I1208 10:10:07.793008    1619 out.go:303] Setting JSON to true
	I1208 10:10:07.815019    1619 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":563,"bootTime":1702058444,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1208 10:10:07.815111    1619 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1208 10:10:07.836203    1619 out.go:97] [download-only-396000] minikube v1.32.0 on Darwin 14.1.2
	I1208 10:10:07.856980    1619 out.go:169] MINIKUBE_LOCATION=17738
	I1208 10:10:07.836349    1619 notify.go:220] Checking for updates...
	I1208 10:10:07.900032    1619 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	I1208 10:10:07.920981    1619 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1208 10:10:07.941862    1619 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 10:10:07.963108    1619 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	W1208 10:10:08.004921    1619 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 10:10:08.005524    1619 config.go:182] Loaded profile config "download-only-396000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1208 10:10:08.005588    1619 start.go:810] api.Load failed for download-only-396000: filestore "download-only-396000": Docker machine "download-only-396000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1208 10:10:08.005705    1619 driver.go:392] Setting default libvirt URI to qemu:///system
	W1208 10:10:08.005744    1619 start.go:810] api.Load failed for download-only-396000: filestore "download-only-396000": Docker machine "download-only-396000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1208 10:10:08.034964    1619 out.go:97] Using the hyperkit driver based on existing profile
	I1208 10:10:08.035036    1619 start.go:298] selected driver: hyperkit
	I1208 10:10:08.035047    1619 start.go:902] validating driver "hyperkit" against &{Name:download-only-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-396000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 10:10:08.035372    1619 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:10:08.035546    1619 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17738-1113/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1208 10:10:08.045036    1619 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1208 10:10:08.048876    1619 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:10:08.048897    1619 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1208 10:10:08.051676    1619 cni.go:84] Creating CNI manager for ""
	I1208 10:10:08.051698    1619 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1208 10:10:08.051712    1619 start_flags.go:323] config:
	{Name:download-only-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-396000 Names
pace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 10:10:08.051851    1619 iso.go:125] acquiring lock: {Name:mk933f5286cca8a935e46c54218c5cced15285e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 10:10:08.072947    1619 out.go:97] Starting control plane node download-only-396000 in cluster download-only-396000
	I1208 10:10:08.072983    1619 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1208 10:10:08.139807    1619 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1208 10:10:08.139829    1619 cache.go:56] Caching tarball of preloaded images
	I1208 10:10:08.140090    1619 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1208 10:10:08.161127    1619 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1208 10:10:08.161153    1619 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-amd64.tar.lz4 ...
	I1208 10:10:08.241820    1619 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-amd64.tar.lz4?checksum=md5:83305d81dd014475bf9dbaaa661cddb4 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1208 10:10:16.157770    1619 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-amd64.tar.lz4 ...
	I1208 10:10:16.158008    1619 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-amd64.tar.lz4 ...
	I1208 10:10:16.698041    1619 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on docker
	I1208 10:10:16.698122    1619 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/download-only-396000/config.json ...
	I1208 10:10:16.698490    1619 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1208 10:10:16.698726    1619 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.1/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.1/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17738-1113/.minikube/cache/darwin/amd64/v1.29.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-396000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.29s)
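The preload download above fetches the tarball with a `?checksum=md5:...` suffix and then verifies it locally. As an illustrative sketch (not minikube's code), the same md5 verification can be reproduced with standard tools; the file contents and digest here are stand-ins for the real tarball:

```shell
# Sketch of md5 verification, mirroring the "?checksum=md5:<hex>" step above.
# Uses md5sum (Linux coreutils); on macOS use `md5 -q` instead.
f=$(mktemp)
printf 'hello' > "$f"                               # stand-in payload
expected="5d41402abc4b2a76b9719d911017c592"         # md5("hello")
actual=$(md5sum "$f" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum ok"                                # prints "checksum ok"
else
  echo "checksum mismatch" >&2
fi
rm -f "$f"
```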

TestDownloadOnly/DeleteAll (0.39s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.39s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-396000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

TestBinaryMirror (1.01s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-901000 --alsologtostderr --binary-mirror http://127.0.0.1:49353 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-901000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-901000
--- PASS: TestBinaryMirror (1.01s)

TestOffline (94.24s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-199000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-199000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (1m28.91065649s)
helpers_test.go:175: Cleaning up "offline-docker-199000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-199000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-199000: (5.324253933s)
--- PASS: TestOffline (94.24s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-249000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-249000: exit status 85 (186.491348ms)

-- stdout --
	* Profile "addons-249000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-249000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-249000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-249000: exit status 85 (206.682382ms)

-- stdout --
	* Profile "addons-249000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-249000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestCertOptions (40.45s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-959000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-959000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (34.824163233s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-959000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-959000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-959000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-959000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-959000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-959000: (5.28099494s)
--- PASS: TestCertOptions (40.45s)

TestCertExpiration (252.23s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-926000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-926000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (39.982485488s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-926000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E1208 10:44:46.170649    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-926000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (26.95917953s)
helpers_test.go:175: Cleaning up "cert-expiration-926000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-926000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-926000: (5.286955407s)
--- PASS: TestCertExpiration (252.23s)
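TestCertExpiration starts the cluster with `--cert-expiration=3m` and then restarts it with `8760h`, exercising expiry-driven certificate rotation. As an illustrative sketch (not the test's code), the underlying "does this cert expire within a window?" check can be done with openssl against a throwaway 1-day self-signed certificate:

```shell
# Sketch of a cert-expiry window check, the property TestCertExpiration exercises.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" -subj "/CN=minikube" 2>/dev/null
# -checkend N exits 0 only if the cert is still valid N seconds from now;
# a 1-day cert fails a 48h (172800s) window, so the else branch runs.
if openssl x509 -checkend 172800 -noout -in "$dir/cert.pem" >/dev/null; then
  echo "cert valid beyond window"
else
  echo "cert expires within 48h"
fi
rm -rf "$dir"
```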

TestDockerFlags (40.33s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-264000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-264000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (34.65577081s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-264000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-264000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-264000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-264000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-264000: (5.276141387s)
--- PASS: TestDockerFlags (40.33s)
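TestDockerFlags passes `--docker-env=FOO=BAR --docker-env=BAZ=BAT` and then asserts on `systemctl show docker --property=Environment`. As an illustrative sketch (not the test's code), extracting one variable from that property line looks like the following; the `line` value is a stand-in for the real systemctl output, and the naive space split ignores systemd's quoting of values that contain spaces:

```shell
# Sketch: pull FOO out of a systemd Environment property line.
line='Environment=FOO=BAR BAZ=BAT'      # stand-in for `systemctl show` output
vars=${line#Environment=}               # strip the property name
foo=$(printf '%s\n' "$vars" | tr ' ' '\n' | grep '^FOO=' | cut -d= -f2)
echo "FOO is $foo"                      # prints "FOO is BAR"
```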

TestForceSystemdFlag (40.53s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-993000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-993000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (35.068633391s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-993000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-993000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-993000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-993000: (5.262003681s)
--- PASS: TestForceSystemdFlag (40.53s)

TestForceSystemdEnv (37.38s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-932000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-932000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (33.82023701s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-932000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-932000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-932000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-932000: (3.386987954s)
--- PASS: TestForceSystemdEnv (37.38s)

TestHyperKitDriverInstallOrUpdate (7s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.00s)

TestErrorSpam/setup (33.83s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-011000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-011000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 --driver=hyperkit : (33.828653997s)
--- PASS: TestErrorSpam/setup (33.83s)

TestErrorSpam/start (1.54s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 start --dry-run
--- PASS: TestErrorSpam/start (1.54s)

TestErrorSpam/status (0.49s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 status
--- PASS: TestErrorSpam/status (0.49s)

TestErrorSpam/pause (1.29s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 pause
--- PASS: TestErrorSpam/pause (1.29s)

TestErrorSpam/unpause (1.29s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 unpause
--- PASS: TestErrorSpam/unpause (1.29s)

TestErrorSpam/stop (5.67s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 stop: (5.232032782s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-011000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-011000 stop
--- PASS: TestErrorSpam/stop (5.67s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17738-1113/.minikube/files/etc/test/nested/copy/1585/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.56s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-688000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-688000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (52.561701093s)
--- PASS: TestFunctional/serial/StartWithProxy (52.56s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.14s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-688000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-688000 --alsologtostderr -v=8: (38.1396568s)
functional_test.go:659: soft start took 38.140141743s for "functional-688000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.14s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-688000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 cache add registry.k8s.io/pause:3.1: (1.173813625s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 cache add registry.k8s.io/pause:3.3: (1.092737579s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3660883818/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 cache add minikube-local-cache-test:functional-688000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 cache delete minikube-local-cache-test:functional-688000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-688000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-688000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (148.570886ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.07s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 kubectl -- --context functional-688000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.77s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-688000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.77s)

TestFunctional/serial/ExtraConfig (40.75s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-688000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-688000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.753892s)
functional_test.go:757: restart took 40.754044056s for "functional-688000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.75s)

TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-688000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.62s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 logs: (2.620485551s)
--- PASS: TestFunctional/serial/LogsCmd (2.62s)

TestFunctional/serial/LogsFileCmd (2.70s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd2523705114/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd2523705114/001/logs.txt: (2.695423745s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.70s)

TestFunctional/serial/InvalidService (4.12s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-688000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-688000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-688000: exit status 115 (263.128129ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.5:30693 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-688000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-688000 config get cpus: exit status 14 (68.129745ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-688000 config get cpus: exit status 14 (56.225644ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
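The `config get cpus` assertions above depend on minikube exiting with status 14 when the key is unset and 0 once it is set. As a hedged sketch (not the harness's actual helper), recovering a subprocess exit code in Go looks like this; `runExitCode` and the `sh -c "exit 14"` stand-in are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runExitCode runs a command and returns its exit status, the way the
// test distinguishes "key not found" (14) from success (0).
func runExitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return -1 // the command could not be started at all
}

func main() {
	// `sh -c "exit 14"` stands in for `minikube config get` on an unset key.
	fmt.Println(runExitCode("sh", "-c", "exit 14")) // → 14
}
```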

TestFunctional/parallel/DashboardCmd (14.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-688000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-688000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2607: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.04s)

TestFunctional/parallel/DryRun (0.94s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-688000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-688000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (488.938621ms)

-- stdout --
	* [functional-688000] minikube v1.32.0 on Darwin 14.1.2
	  - MINIKUBE_LOCATION=17738
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1208 10:15:10.961407    2568 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:15:10.961696    2568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:15:10.961702    2568 out.go:309] Setting ErrFile to fd 2...
	I1208 10:15:10.961706    2568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:15:10.961878    2568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	I1208 10:15:10.963249    2568 out.go:303] Setting JSON to false
	I1208 10:15:10.985582    2568 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":866,"bootTime":1702058444,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1208 10:15:10.985702    2568 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1208 10:15:11.007609    2568 out.go:177] * [functional-688000] minikube v1.32.0 on Darwin 14.1.2
	I1208 10:15:11.049279    2568 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 10:15:11.049386    2568 notify.go:220] Checking for updates...
	I1208 10:15:11.092871    2568 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	I1208 10:15:11.114150    2568 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1208 10:15:11.135099    2568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 10:15:11.155818    2568 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	I1208 10:15:11.177083    2568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 10:15:11.198582    2568 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1208 10:15:11.198944    2568 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:15:11.198989    2568 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:15:11.207550    2568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50230
	I1208 10:15:11.207915    2568 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:15:11.208322    2568 main.go:141] libmachine: Using API Version  1
	I1208 10:15:11.208332    2568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:15:11.208577    2568 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:15:11.208676    2568 main.go:141] libmachine: (functional-688000) Calling .DriverName
	I1208 10:15:11.208869    2568 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 10:15:11.209095    2568 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:15:11.209121    2568 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:15:11.217226    2568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50232
	I1208 10:15:11.217574    2568 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:15:11.217920    2568 main.go:141] libmachine: Using API Version  1
	I1208 10:15:11.217938    2568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:15:11.218142    2568 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:15:11.218237    2568 main.go:141] libmachine: (functional-688000) Calling .DriverName
	I1208 10:15:11.245886    2568 out.go:177] * Using the hyperkit driver based on existing profile
	I1208 10:15:11.287184    2568 start.go:298] selected driver: hyperkit
	I1208 10:15:11.287207    2568 start.go:902] validating driver "hyperkit" against &{Name:functional-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-688000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 10:15:11.287436    2568 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 10:15:11.312143    2568 out.go:177] 
	W1208 10:15:11.333099    2568 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1208 10:15:11.354235    2568 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-688000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (0.94s)
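The dry run fails fast because the requested 250MiB is below minikube's usable minimum of 1800MB. A minimal sketch of that validation, with the threshold and message taken from the error text in the log above (minikube's real check lives elsewhere in its codebase; `validateMemory` is a hypothetical helper):

```go
package main

import "fmt"

// minUsableMB comes from the RSRC_INSUFFICIENT_REQ_MEMORY message captured
// above; it is not read from minikube's source here.
const minUsableMB = 1800

// validateMemory rejects allocations below the usable minimum, mirroring
// the dry-run behavior that produced exit status 23 in this test.
func validateMemory(reqMB int) error {
	if reqMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB", reqMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the dry run above
	fmt.Println(validateMemory(4000)) // accepted, as in the original profile
}
```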

TestFunctional/parallel/InternationalLanguage (0.46s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-688000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-688000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (461.935394ms)

-- stdout --
	* [functional-688000] minikube v1.32.0 sur Darwin 14.1.2
	  - MINIKUBE_LOCATION=17738
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1208 10:15:11.891649    2587 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:15:11.891853    2587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:15:11.891858    2587 out.go:309] Setting ErrFile to fd 2...
	I1208 10:15:11.891862    2587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:15:11.892072    2587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	I1208 10:15:11.893645    2587 out.go:303] Setting JSON to false
	I1208 10:15:11.915845    2587 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":867,"bootTime":1702058444,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1208 10:15:11.915955    2587 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1208 10:15:11.937138    2587 out.go:177] * [functional-688000] minikube v1.32.0 sur Darwin 14.1.2
	I1208 10:15:11.979330    2587 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 10:15:11.999997    2587 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	I1208 10:15:11.979410    2587 notify.go:220] Checking for updates...
	I1208 10:15:12.021168    2587 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1208 10:15:12.042126    2587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 10:15:12.062968    2587 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	I1208 10:15:12.084206    2587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 10:15:12.105988    2587 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1208 10:15:12.106687    2587 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:15:12.106767    2587 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:15:12.115806    2587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50240
	I1208 10:15:12.116159    2587 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:15:12.116580    2587 main.go:141] libmachine: Using API Version  1
	I1208 10:15:12.116592    2587 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:15:12.116810    2587 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:15:12.116963    2587 main.go:141] libmachine: (functional-688000) Calling .DriverName
	I1208 10:15:12.117165    2587 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 10:15:12.117403    2587 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:15:12.117426    2587 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:15:12.125296    2587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50242
	I1208 10:15:12.125628    2587 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:15:12.125987    2587 main.go:141] libmachine: Using API Version  1
	I1208 10:15:12.126013    2587 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:15:12.126205    2587 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:15:12.126304    2587 main.go:141] libmachine: (functional-688000) Calling .DriverName
	I1208 10:15:12.154010    2587 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I1208 10:15:12.196194    2587 start.go:298] selected driver: hyperkit
	I1208 10:15:12.196220    2587 start.go:902] validating driver "hyperkit" against &{Name:functional-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-688000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 10:15:12.196475    2587 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 10:15:12.221037    2587 out.go:177] 
	W1208 10:15:12.242258    2587 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1208 10:15:12.263264    2587 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.46s)

TestFunctional/parallel/StatusCmd (0.49s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.49s)

TestFunctional/parallel/ServiceCmdConnect (7.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-688000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-688000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-fjc4f" [3a60a4c7-694f-4a7b-b5a2-c68463a82223] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-fjc4f" [3a60a4c7-694f-4a7b-b5a2-c68463a82223] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.013142129s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.169.0.5:31528
functional_test.go:1674: http://192.169.0.5:31528: success! body:

Hostname: hello-node-connect-55497b8b78-fjc4f

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.5:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.5:31528
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.56s)

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (28.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [59695584-be01-4f2c-a08f-3c267617c8a4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009809794s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-688000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-688000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-688000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-688000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1a3b4667-3ada-4ccc-8b5b-8e8a08d69eab] Pending
helpers_test.go:344: "sp-pod" [1a3b4667-3ada-4ccc-8b5b-8e8a08d69eab] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1a3b4667-3ada-4ccc-8b5b-8e8a08d69eab] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.011275326s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-688000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-688000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-688000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ae49f367-bb89-46a5-8afa-9d167328d057] Pending
helpers_test.go:344: "sp-pod" [ae49f367-bb89-46a5-8afa-9d167328d057] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ae49f367-bb89-46a5-8afa-9d167328d057] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.021757536s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-688000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.45s)

TestFunctional/parallel/SSHCmd (0.29s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.29s)

TestFunctional/parallel/CpCmd (0.61s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh -n functional-688000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 cp functional-688000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd3685324038/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh -n functional-688000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.61s)

TestFunctional/parallel/MySQL (25.88s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-688000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-5csrv" [6bbdf9ab-ea39-48db-9dbf-0d0c96a8f204] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-5csrv" [6bbdf9ab-ea39-48db-9dbf-0d0c96a8f204] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.021220857s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-688000 exec mysql-859648c796-5csrv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-688000 exec mysql-859648c796-5csrv -- mysql -ppassword -e "show databases;": exit status 1 (149.244009ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-688000 exec mysql-859648c796-5csrv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-688000 exec mysql-859648c796-5csrv -- mysql -ppassword -e "show databases;": exit status 1 (110.536577ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-688000 exec mysql-859648c796-5csrv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.88s)

TestFunctional/parallel/FileSync (0.23s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1585/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "sudo cat /etc/test/nested/copy/1585/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.18s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1585.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "sudo cat /etc/ssl/certs/1585.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1585.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "sudo cat /usr/share/ca-certificates/1585.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "sudo cat /etc/ssl/certs/15852.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "sudo cat /usr/share/ca-certificates/15852.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.18s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-688000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-688000 ssh "sudo systemctl is-active crio": exit status 1 (136.98604ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

TestFunctional/parallel/License (0.58s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

TestFunctional/parallel/Version/short (0.1s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.66s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-688000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-688000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-688000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-688000 image ls --format short --alsologtostderr:
I1208 10:15:14.329688    2621 out.go:296] Setting OutFile to fd 1 ...
I1208 10:15:14.330019    2621 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 10:15:14.330025    2621 out.go:309] Setting ErrFile to fd 2...
I1208 10:15:14.330030    2621 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 10:15:14.330223    2621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
I1208 10:15:14.330851    2621 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1208 10:15:14.330946    2621 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1208 10:15:14.331325    2621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1208 10:15:14.331400    2621 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1208 10:15:14.339342    2621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50297
I1208 10:15:14.339774    2621 main.go:141] libmachine: () Calling .GetVersion
I1208 10:15:14.340225    2621 main.go:141] libmachine: Using API Version  1
I1208 10:15:14.340255    2621 main.go:141] libmachine: () Calling .SetConfigRaw
I1208 10:15:14.340497    2621 main.go:141] libmachine: () Calling .GetMachineName
I1208 10:15:14.340632    2621 main.go:141] libmachine: (functional-688000) Calling .GetState
I1208 10:15:14.340739    2621 main.go:141] libmachine: (functional-688000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1208 10:15:14.340830    2621 main.go:141] libmachine: (functional-688000) DBG | hyperkit pid from json: 1858
I1208 10:15:14.342157    2621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1208 10:15:14.342183    2621 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1208 10:15:14.350238    2621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50299
I1208 10:15:14.350590    2621 main.go:141] libmachine: () Calling .GetVersion
I1208 10:15:14.350972    2621 main.go:141] libmachine: Using API Version  1
I1208 10:15:14.350992    2621 main.go:141] libmachine: () Calling .SetConfigRaw
I1208 10:15:14.351199    2621 main.go:141] libmachine: () Calling .GetMachineName
I1208 10:15:14.351303    2621 main.go:141] libmachine: (functional-688000) Calling .DriverName
I1208 10:15:14.351457    2621 ssh_runner.go:195] Run: systemctl --version
I1208 10:15:14.351476    2621 main.go:141] libmachine: (functional-688000) Calling .GetSSHHostname
I1208 10:15:14.351552    2621 main.go:141] libmachine: (functional-688000) Calling .GetSSHPort
I1208 10:15:14.351638    2621 main.go:141] libmachine: (functional-688000) Calling .GetSSHKeyPath
I1208 10:15:14.351723    2621 main.go:141] libmachine: (functional-688000) Calling .GetSSHUsername
I1208 10:15:14.351819    2621 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/functional-688000/id_rsa Username:docker}
I1208 10:15:14.389748    2621 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1208 10:15:14.406096    2621 main.go:141] libmachine: Making call to close driver server
I1208 10:15:14.406106    2621 main.go:141] libmachine: (functional-688000) Calling .Close
I1208 10:15:14.406265    2621 main.go:141] libmachine: Successfully made call to close driver server
I1208 10:15:14.406274    2621 main.go:141] libmachine: Making call to close connection to plugin binary
I1208 10:15:14.406280    2621 main.go:141] libmachine: Making call to close driver server
I1208 10:15:14.406285    2621 main.go:141] libmachine: (functional-688000) Calling .Close
I1208 10:15:14.406407    2621 main.go:141] libmachine: Successfully made call to close driver server
I1208 10:15:14.406419    2621 main.go:141] libmachine: Making call to close connection to plugin binary
I1208 10:15:14.406422    2621 main.go:141] libmachine: (functional-688000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-688000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/localhost/my-image                | functional-688000 | 0b393b9c554f7 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-688000 | d794c8d5e1d8a | 30B    |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/google-containers/addon-resizer      | functional-688000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 01e5c69afaf63 | 42.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| docker.io/library/mysql                     | 5.7               | bdba757bc9336 | 501MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-688000 image ls --format table --alsologtostderr:
I1208 10:15:17.130324    2648 out.go:296] Setting OutFile to fd 1 ...
I1208 10:15:17.130537    2648 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 10:15:17.130543    2648 out.go:309] Setting ErrFile to fd 2...
I1208 10:15:17.130548    2648 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 10:15:17.130733    2648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
I1208 10:15:17.131341    2648 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1208 10:15:17.131437    2648 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1208 10:15:17.131803    2648 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1208 10:15:17.131853    2648 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1208 10:15:17.139604    2648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50332
I1208 10:15:17.140056    2648 main.go:141] libmachine: () Calling .GetVersion
I1208 10:15:17.140478    2648 main.go:141] libmachine: Using API Version  1
I1208 10:15:17.140488    2648 main.go:141] libmachine: () Calling .SetConfigRaw
I1208 10:15:17.140684    2648 main.go:141] libmachine: () Calling .GetMachineName
I1208 10:15:17.140781    2648 main.go:141] libmachine: (functional-688000) Calling .GetState
I1208 10:15:17.140860    2648 main.go:141] libmachine: (functional-688000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1208 10:15:17.140935    2648 main.go:141] libmachine: (functional-688000) DBG | hyperkit pid from json: 1858
I1208 10:15:17.142231    2648 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1208 10:15:17.142253    2648 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1208 10:15:17.150102    2648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50334
I1208 10:15:17.150457    2648 main.go:141] libmachine: () Calling .GetVersion
I1208 10:15:17.150800    2648 main.go:141] libmachine: Using API Version  1
I1208 10:15:17.150812    2648 main.go:141] libmachine: () Calling .SetConfigRaw
I1208 10:15:17.151067    2648 main.go:141] libmachine: () Calling .GetMachineName
I1208 10:15:17.151200    2648 main.go:141] libmachine: (functional-688000) Calling .DriverName
I1208 10:15:17.151360    2648 ssh_runner.go:195] Run: systemctl --version
I1208 10:15:17.151380    2648 main.go:141] libmachine: (functional-688000) Calling .GetSSHHostname
I1208 10:15:17.151464    2648 main.go:141] libmachine: (functional-688000) Calling .GetSSHPort
I1208 10:15:17.151557    2648 main.go:141] libmachine: (functional-688000) Calling .GetSSHKeyPath
I1208 10:15:17.151633    2648 main.go:141] libmachine: (functional-688000) Calling .GetSSHUsername
I1208 10:15:17.151708    2648 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/functional-688000/id_rsa Username:docker}
I1208 10:15:17.186216    2648 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1208 10:15:17.204823    2648 main.go:141] libmachine: Making call to close driver server
I1208 10:15:17.204834    2648 main.go:141] libmachine: (functional-688000) Calling .Close
I1208 10:15:17.204992    2648 main.go:141] libmachine: (functional-688000) DBG | Closing plugin on server side
I1208 10:15:17.204999    2648 main.go:141] libmachine: Successfully made call to close driver server
I1208 10:15:17.205008    2648 main.go:141] libmachine: Making call to close connection to plugin binary
I1208 10:15:17.205013    2648 main.go:141] libmachine: Making call to close driver server
I1208 10:15:17.205019    2648 main.go:141] libmachine: (functional-688000) Calling .Close
I1208 10:15:17.205150    2648 main.go:141] libmachine: Successfully made call to close driver server
I1208 10:15:17.205158    2648 main.go:141] libmachine: Making call to close connection to plugin binary
I1208 10:15:17.205183    2648 main.go:141] libmachine: (functional-688000) DBG | Closing plugin on server side
2023/12/08 10:15:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-688000 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"d794c8d5e1d8af5826f0d32699a205f6563637db359654500141f7100f1ac84b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-688000"],"size":"30"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-688000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"0b393b9c554f7ae7862d8adc56489293b70ee6d293990fb4d8050a75f72872ed","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-688000"],"size":"1240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-688000 image ls --format json --alsologtostderr:
I1208 10:15:16.978743    2644 out.go:296] Setting OutFile to fd 1 ...
I1208 10:15:16.978964    2644 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 10:15:16.978970    2644 out.go:309] Setting ErrFile to fd 2...
I1208 10:15:16.978974    2644 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 10:15:16.979169    2644 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
I1208 10:15:16.979776    2644 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1208 10:15:16.979867    2644 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1208 10:15:16.980205    2644 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1208 10:15:16.980256    2644 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1208 10:15:16.987966    2644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50327
I1208 10:15:16.988380    2644 main.go:141] libmachine: () Calling .GetVersion
I1208 10:15:16.988826    2644 main.go:141] libmachine: Using API Version  1
I1208 10:15:16.988852    2644 main.go:141] libmachine: () Calling .SetConfigRaw
I1208 10:15:16.989110    2644 main.go:141] libmachine: () Calling .GetMachineName
I1208 10:15:16.989230    2644 main.go:141] libmachine: (functional-688000) Calling .GetState
I1208 10:15:16.989318    2644 main.go:141] libmachine: (functional-688000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1208 10:15:16.989386    2644 main.go:141] libmachine: (functional-688000) DBG | hyperkit pid from json: 1858
I1208 10:15:16.990650    2644 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1208 10:15:16.990688    2644 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1208 10:15:16.998529    2644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50329
I1208 10:15:16.998871    2644 main.go:141] libmachine: () Calling .GetVersion
I1208 10:15:16.999265    2644 main.go:141] libmachine: Using API Version  1
I1208 10:15:16.999279    2644 main.go:141] libmachine: () Calling .SetConfigRaw
I1208 10:15:16.999498    2644 main.go:141] libmachine: () Calling .GetMachineName
I1208 10:15:16.999606    2644 main.go:141] libmachine: (functional-688000) Calling .DriverName
I1208 10:15:16.999760    2644 ssh_runner.go:195] Run: systemctl --version
I1208 10:15:16.999780    2644 main.go:141] libmachine: (functional-688000) Calling .GetSSHHostname
I1208 10:15:16.999854    2644 main.go:141] libmachine: (functional-688000) Calling .GetSSHPort
I1208 10:15:16.999928    2644 main.go:141] libmachine: (functional-688000) Calling .GetSSHKeyPath
I1208 10:15:17.000003    2644 main.go:141] libmachine: (functional-688000) Calling .GetSSHUsername
I1208 10:15:17.000082    2644 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/functional-688000/id_rsa Username:docker}
I1208 10:15:17.034433    2644 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1208 10:15:17.051351    2644 main.go:141] libmachine: Making call to close driver server
I1208 10:15:17.051363    2644 main.go:141] libmachine: (functional-688000) Calling .Close
I1208 10:15:17.051526    2644 main.go:141] libmachine: (functional-688000) DBG | Closing plugin on server side
I1208 10:15:17.051556    2644 main.go:141] libmachine: Successfully made call to close driver server
I1208 10:15:17.051578    2644 main.go:141] libmachine: Making call to close connection to plugin binary
I1208 10:15:17.051590    2644 main.go:141] libmachine: Making call to close driver server
I1208 10:15:17.051598    2644 main.go:141] libmachine: (functional-688000) Calling .Close
I1208 10:15:17.051744    2644 main.go:141] libmachine: (functional-688000) DBG | Closing plugin on server side
I1208 10:15:17.051759    2644 main.go:141] libmachine: Successfully made call to close driver server
I1208 10:15:17.051772    2644 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-688000 image ls --format yaml --alsologtostderr:
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 0b393b9c554f7ae7862d8adc56489293b70ee6d293990fb4d8050a75f72872ed
repoDigests: []
repoTags:
- docker.io/localhost/my-image:functional-688000
size: "1240000"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1240000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: d794c8d5e1d8af5826f0d32699a205f6563637db359654500141f7100f1ac84b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-688000
size: "30"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-688000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-688000 image ls --format yaml --alsologtostderr:
I1208 10:15:16.826923    2640 out.go:296] Setting OutFile to fd 1 ...
I1208 10:15:16.827220    2640 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 10:15:16.827225    2640 out.go:309] Setting ErrFile to fd 2...
I1208 10:15:16.827229    2640 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 10:15:16.827404    2640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
I1208 10:15:16.828004    2640 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1208 10:15:16.828096    2640 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1208 10:15:16.828523    2640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1208 10:15:16.828588    2640 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1208 10:15:16.836151    2640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50322
I1208 10:15:16.836599    2640 main.go:141] libmachine: () Calling .GetVersion
I1208 10:15:16.837014    2640 main.go:141] libmachine: Using API Version  1
I1208 10:15:16.837024    2640 main.go:141] libmachine: () Calling .SetConfigRaw
I1208 10:15:16.837269    2640 main.go:141] libmachine: () Calling .GetMachineName
I1208 10:15:16.837383    2640 main.go:141] libmachine: (functional-688000) Calling .GetState
I1208 10:15:16.837472    2640 main.go:141] libmachine: (functional-688000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1208 10:15:16.837525    2640 main.go:141] libmachine: (functional-688000) DBG | hyperkit pid from json: 1858
I1208 10:15:16.838798    2640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1208 10:15:16.838821    2640 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1208 10:15:16.846531    2640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50324
I1208 10:15:16.846906    2640 main.go:141] libmachine: () Calling .GetVersion
I1208 10:15:16.847254    2640 main.go:141] libmachine: Using API Version  1
I1208 10:15:16.847268    2640 main.go:141] libmachine: () Calling .SetConfigRaw
I1208 10:15:16.847456    2640 main.go:141] libmachine: () Calling .GetMachineName
I1208 10:15:16.847551    2640 main.go:141] libmachine: (functional-688000) Calling .DriverName
I1208 10:15:16.847702    2640 ssh_runner.go:195] Run: systemctl --version
I1208 10:15:16.847721    2640 main.go:141] libmachine: (functional-688000) Calling .GetSSHHostname
I1208 10:15:16.847801    2640 main.go:141] libmachine: (functional-688000) Calling .GetSSHPort
I1208 10:15:16.847883    2640 main.go:141] libmachine: (functional-688000) Calling .GetSSHKeyPath
I1208 10:15:16.847961    2640 main.go:141] libmachine: (functional-688000) Calling .GetSSHUsername
I1208 10:15:16.848068    2640 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/functional-688000/id_rsa Username:docker}
I1208 10:15:16.883467    2640 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1208 10:15:16.899544    2640 main.go:141] libmachine: Making call to close driver server
I1208 10:15:16.899553    2640 main.go:141] libmachine: (functional-688000) Calling .Close
I1208 10:15:16.899701    2640 main.go:141] libmachine: Successfully made call to close driver server
I1208 10:15:16.899715    2640 main.go:141] libmachine: Making call to close connection to plugin binary
I1208 10:15:16.899722    2640 main.go:141] libmachine: Making call to close driver server
I1208 10:15:16.899721    2640 main.go:141] libmachine: (functional-688000) DBG | Closing plugin on server side
I1208 10:15:16.899730    2640 main.go:141] libmachine: (functional-688000) Calling .Close
I1208 10:15:16.899840    2640 main.go:141] libmachine: Successfully made call to close driver server
I1208 10:15:16.899851    2640 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-688000 ssh pgrep buildkitd: exit status 1 (124.420544ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image build -t localhost/my-image:functional-688000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 image build -t localhost/my-image:functional-688000 testdata/build --alsologtostderr: (2.065190542s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-688000 image build -t localhost/my-image:functional-688000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in c8b1d39384be
Removing intermediate container c8b1d39384be
---> 0206ecba165b
Step 3/3 : ADD content.txt /
---> 0b393b9c554f
Successfully built 0b393b9c554f
Successfully tagged localhost/my-image:functional-688000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-688000 image build -t localhost/my-image:functional-688000 testdata/build --alsologtostderr:
I1208 10:15:14.609821    2630 out.go:296] Setting OutFile to fd 1 ...
I1208 10:15:14.610143    2630 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 10:15:14.610149    2630 out.go:309] Setting ErrFile to fd 2...
I1208 10:15:14.610153    2630 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 10:15:14.610347    2630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
I1208 10:15:14.610940    2630 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1208 10:15:14.611535    2630 config.go:182] Loaded profile config "functional-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1208 10:15:14.611934    2630 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1208 10:15:14.611970    2630 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1208 10:15:14.619711    2630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50309
I1208 10:15:14.620096    2630 main.go:141] libmachine: () Calling .GetVersion
I1208 10:15:14.620525    2630 main.go:141] libmachine: Using API Version  1
I1208 10:15:14.620535    2630 main.go:141] libmachine: () Calling .SetConfigRaw
I1208 10:15:14.620736    2630 main.go:141] libmachine: () Calling .GetMachineName
I1208 10:15:14.620834    2630 main.go:141] libmachine: (functional-688000) Calling .GetState
I1208 10:15:14.620921    2630 main.go:141] libmachine: (functional-688000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1208 10:15:14.620983    2630 main.go:141] libmachine: (functional-688000) DBG | hyperkit pid from json: 1858
I1208 10:15:14.622265    2630 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1208 10:15:14.622288    2630 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1208 10:15:14.630067    2630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50311
I1208 10:15:14.630424    2630 main.go:141] libmachine: () Calling .GetVersion
I1208 10:15:14.630756    2630 main.go:141] libmachine: Using API Version  1
I1208 10:15:14.630767    2630 main.go:141] libmachine: () Calling .SetConfigRaw
I1208 10:15:14.630971    2630 main.go:141] libmachine: () Calling .GetMachineName
I1208 10:15:14.631083    2630 main.go:141] libmachine: (functional-688000) Calling .DriverName
I1208 10:15:14.631234    2630 ssh_runner.go:195] Run: systemctl --version
I1208 10:15:14.631255    2630 main.go:141] libmachine: (functional-688000) Calling .GetSSHHostname
I1208 10:15:14.631339    2630 main.go:141] libmachine: (functional-688000) Calling .GetSSHPort
I1208 10:15:14.631420    2630 main.go:141] libmachine: (functional-688000) Calling .GetSSHKeyPath
I1208 10:15:14.631500    2630 main.go:141] libmachine: (functional-688000) Calling .GetSSHUsername
I1208 10:15:14.631584    2630 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/functional-688000/id_rsa Username:docker}
I1208 10:15:14.666668    2630 build_images.go:151] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2886817883.tar
I1208 10:15:14.666739    2630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1208 10:15:14.673566    2630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2886817883.tar
I1208 10:15:14.676507    2630 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2886817883.tar: stat -c "%s %y" /var/lib/minikube/build/build.2886817883.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2886817883.tar': No such file or directory
I1208 10:15:14.676530    2630 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2886817883.tar --> /var/lib/minikube/build/build.2886817883.tar (3072 bytes)
I1208 10:15:14.693277    2630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2886817883
I1208 10:15:14.699747    2630 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2886817883 -xf /var/lib/minikube/build/build.2886817883.tar
I1208 10:15:14.706203    2630 docker.go:346] Building image: /var/lib/minikube/build/build.2886817883
I1208 10:15:14.706259    2630 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-688000 /var/lib/minikube/build/build.2886817883
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1208 10:15:16.579255    2630 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-688000 /var/lib/minikube/build/build.2886817883: (1.872973189s)
I1208 10:15:16.579329    2630 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2886817883
I1208 10:15:16.585853    2630 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2886817883.tar
I1208 10:15:16.591873    2630 build_images.go:207] Built localhost/my-image:functional-688000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2886817883.tar
I1208 10:15:16.591895    2630 build_images.go:123] succeeded building to: functional-688000
I1208 10:15:16.591899    2630 build_images.go:124] failed building to: 
I1208 10:15:16.591932    2630 main.go:141] libmachine: Making call to close driver server
I1208 10:15:16.591941    2630 main.go:141] libmachine: (functional-688000) Calling .Close
I1208 10:15:16.592088    2630 main.go:141] libmachine: Successfully made call to close driver server
I1208 10:15:16.592103    2630 main.go:141] libmachine: Making call to close connection to plugin binary
I1208 10:15:16.592108    2630 main.go:141] libmachine: Making call to close driver server
I1208 10:15:16.592132    2630 main.go:141] libmachine: (functional-688000) Calling .Close
I1208 10:15:16.592109    2630 main.go:141] libmachine: (functional-688000) DBG | Closing plugin on server side
I1208 10:15:16.592254    2630 main.go:141] libmachine: Successfully made call to close driver server
I1208 10:15:16.592266    2630 main.go:141] libmachine: Making call to close connection to plugin binary
I1208 10:15:16.592302    2630 main.go:141] libmachine: (functional-688000) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.34s)

TestFunctional/parallel/ImageCommands/Setup (2.57s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.487193618s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-688000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.57s)

TestFunctional/parallel/DockerEnv/bash (0.74s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-688000 docker-env) && out/minikube-darwin-amd64 status -p functional-688000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-688000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.74s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image load --daemon gcr.io/google-containers/addon-resizer:functional-688000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 image load --daemon gcr.io/google-containers/addon-resizer:functional-688000 --alsologtostderr: (3.260186707s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.42s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image load --daemon gcr.io/google-containers/addon-resizer:functional-688000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 image load --daemon gcr.io/google-containers/addon-resizer:functional-688000 --alsologtostderr: (1.921689529s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.09s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.951375957s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-688000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image load --daemon gcr.io/google-containers/addon-resizer:functional-688000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 image load --daemon gcr.io/google-containers/addon-resizer:functional-688000 --alsologtostderr: (3.22431036s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.39s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image save gcr.io/google-containers/addon-resizer:functional-688000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 image save gcr.io/google-containers/addon-resizer:functional-688000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.084932022s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.09s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image rm gcr.io/google-containers/addon-resizer:functional-688000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.155611856s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-688000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 image save --daemon gcr.io/google-containers/addon-resizer:functional-688000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-688000 image save --daemon gcr.io/google-containers/addon-resizer:functional-688000 --alsologtostderr: (1.194293895s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-688000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.30s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-688000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-688000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-xczl7" [eddf3352-9f03-49f9-8cf3-afeb69fb70ae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-xczl7" [eddf3352-9f03-49f9-8cf3-afeb69fb70ae] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.022467469s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.13s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-688000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-688000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-688000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-688000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2319: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-688000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.17s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-688000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [663ad8d9-dd93-499e-b7f5-654a1e637ff8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [663ad8d9-dd93-499e-b7f5-654a1e637ff8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.009134278s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.17s)

TestFunctional/parallel/ServiceCmd/List (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 service list -o json
functional_test.go:1493: Took "370.291638ms" to run "out/minikube-darwin-amd64 -p functional-688000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.169.0.5:30119
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

TestFunctional/parallel/ServiceCmd/Format (0.26s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

TestFunctional/parallel/ServiceCmd/URL (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.169.0.5:30119
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-688000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.20.88 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-688000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.28s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "199.653957ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "78.484984ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "195.611361ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "77.191819ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

TestFunctional/parallel/MountCmd/any-port (6.1s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port489390573/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702059301965643000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port489390573/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702059301965643000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port489390573/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702059301965643000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port489390573/001/test-1702059301965643000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (151.935358ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  8 18:15 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  8 18:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  8 18:15 test-1702059301965643000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh cat /mount-9p/test-1702059301965643000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-688000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [eba55063-ccb6-4192-8c28-9602e0fd81fe] Pending
helpers_test.go:344: "busybox-mount" [eba55063-ccb6-4192-8c28-9602e0fd81fe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [eba55063-ccb6-4192-8c28-9602e0fd81fe] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [eba55063-ccb6-4192-8c28-9602e0fd81fe] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.013408778s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-688000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port489390573/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.10s)

TestFunctional/parallel/MountCmd/specific-port (1.42s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4174341180/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (170.559511ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4174341180/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-688000 ssh "sudo umount -f /mount-9p": exit status 1 (123.784865ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-688000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4174341180/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.42s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3147760811/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3147760811/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3147760811/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T" /mount1: exit status 1 (167.054326ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-688000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-688000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3147760811/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3147760811/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-688000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3147760811/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-688000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.05s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-688000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-688000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (37.99s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-915000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-915000 --driver=hyperkit : (37.991600138s)
--- PASS: TestImageBuild/serial/Setup (37.99s)

TestImageBuild/serial/NormalBuild (1.34s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-915000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-915000: (1.335715525s)
--- PASS: TestImageBuild/serial/NormalBuild (1.34s)

TestImageBuild/serial/BuildWithBuildArg (0.74s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-915000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.74s)

TestImageBuild/serial/BuildWithDockerIgnore (0.24s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-915000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-915000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

TestIngressAddonLegacy/StartLegacyK8sCluster (73.5s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-251000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-251000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m13.504179043s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (73.50s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.84s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-251000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-251000 addons enable ingress --alsologtostderr -v=5: (14.843521593s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.84s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-251000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (30.79s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-251000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-251000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.757454297s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-251000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-251000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [04e75ecd-8931-49e9-9ad4-49dbedaf944e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [04e75ecd-8931-49e9-9ad4-49dbedaf944e] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.006440572s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-251000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-251000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-251000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.169.0.7
addons_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-251000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-251000 addons disable ingress-dns --alsologtostderr -v=1: (1.755028418s)
addons_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-251000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-251000 addons disable ingress --alsologtostderr -v=1: (7.362650899s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (30.79s)

TestJSONOutput/start/Command (50.11s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-102000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-102000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (50.110244081s)
--- PASS: TestJSONOutput/start/Command (50.11s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.47s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-102000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-102000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-102000 --output=json --user=testUser
E1208 10:19:15.031503    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:15.039107    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:15.049577    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:15.071764    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:15.113069    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:15.194440    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:15.356627    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:15.677674    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:16.318455    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:19:17.599960    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-102000 --output=json --user=testUser: (8.163150517s)
--- PASS: TestJSONOutput/stop/Command (8.16s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.8s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-894000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-894000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (426.752053ms)
-- stdout --
	{"specversion":"1.0","id":"46f0fb30-2911-4c27-8048-57526afa4faa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-894000] minikube v1.32.0 on Darwin 14.1.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b63e313b-5cd6-4df7-ad94-cca1cfefc159","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17738"}}
	{"specversion":"1.0","id":"b2dba7f6-bf8a-4fad-8e06-8028151cbedc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig"}}
	{"specversion":"1.0","id":"2df38716-5a32-418a-88f4-48653ac6ed09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"2b3f1636-78af-46dc-a8d3-7e4c6c159035","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ac028d43-55c8-455f-b20a-f2f34493e845","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube"}}
	{"specversion":"1.0","id":"ada378a7-8739-4ccd-a45b-520d75696468","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f89be89-aed9-4f23-a6f6-62ca14d22350","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-894000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-894000
--- PASS: TestErrorJSONOutput (0.80s)
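The `--output=json` records in the stdout above follow the CloudEvents envelope, so a consumer can read them line by line and branch on the `type` attribute. A minimal sketch in Python, using the `io.k8s.sigs.minikube.error` event copied verbatim from the stdout above; the branching logic is illustrative, not part of minikube:

```python
import json

# CloudEvents record emitted by `minikube start --output=json`,
# copied verbatim from the TestErrorJSONOutput stdout above.
event_line = """{"specversion":"1.0","id":"0f89be89-aed9-4f23-a6f6-62ca14d22350","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}"""

event = json.loads(event_line)
# The last dotted component of `type` distinguishes the record kind
# ("step", "info", "error" in the stdout above).
kind = event["type"].rsplit(".", 1)[-1]
data = event["data"]

if kind == "error":
    # Error events carry the exit code as a string in the payload.
    print(f"{data['name']}: {data['message']} (exit {data['exitcode']})")
```

The `exit status 56` reported by the test harness matches the `exitcode` field inside the error event's payload.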

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMountStart/serial/StartWithMountFirst (16.11s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-291000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E1208 10:20:36.965497    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-291000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.108527725s)
--- PASS: TestMountStart/serial/StartWithMountFirst (16.11s)

TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-291000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-291000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (16.41s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-300000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-300000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.41191827s)
--- PASS: TestMountStart/serial/StartWithMountSecond (16.41s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-300000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-300000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (2.35s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-291000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-291000 --alsologtostderr -v=5: (2.351846084s)
--- PASS: TestMountStart/serial/DeleteFirst (2.35s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-300000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-300000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (2.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-300000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-300000: (2.233238564s)
--- PASS: TestMountStart/serial/Stop (2.23s)

TestMountStart/serial/RestartStopped (16.43s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-300000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-300000: (15.427076997s)
--- PASS: TestMountStart/serial/RestartStopped (16.43s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-300000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-300000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (160.41s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-261000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E1208 10:21:58.887684    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:22:44.651373    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:44.657213    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:44.668139    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:44.688728    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:44.729022    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:44.811041    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:44.972432    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:45.292595    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:45.933241    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:47.214370    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:49.775208    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:22:54.895367    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:23:05.137346    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:23:25.618414    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-261000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (2m40.142545174s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (160.41s)

TestMultiNode/serial/DeployApp2Nodes (4.44s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- rollout status deployment/busybox
E1208 10:24:06.579113    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
multinode_test.go:514: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-261000 -- rollout status deployment/busybox: (2.806831401s)
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- exec busybox-5bc68d56bd-77kww -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- exec busybox-5bc68d56bd-wn2nq -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- exec busybox-5bc68d56bd-77kww -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- exec busybox-5bc68d56bd-wn2nq -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- exec busybox-5bc68d56bd-77kww -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- exec busybox-5bc68d56bd-wn2nq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.44s)

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- exec busybox-5bc68d56bd-77kww -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- exec busybox-5bc68d56bd-77kww -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- exec busybox-5bc68d56bd-wn2nq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-261000 -- exec busybox-5bc68d56bd-wn2nq -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)
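The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from busybox's nslookup output by taking the third space-separated field of line 5. A sketch of the same extraction in Python; the sample output below is an illustrative assumption of busybox nslookup formatting, not captured from this run:

```python
# Illustrative busybox-style nslookup output (assumed, not from this run);
# the resolved address 192.169.0.1 matches the host IP pinged by the test.
sample = """Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.169.0.1 host.minikube.internal"""

line5 = sample.splitlines()[4]     # awk 'NR==5' -> fifth line, 1-indexed
host_ip = line5.split(" ")[2]      # cut -d' ' -f3 -> third space-separated field
print(host_ip)
```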

TestMultiNode/serial/AddNode (32.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-261000 -v 3 --alsologtostderr
E1208 10:24:15.031877    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-261000 -v 3 --alsologtostderr: (32.314630383s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (32.64s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-261000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E1208 10:24:42.729489    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (5.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp testdata/cp-test.txt multinode-261000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp multinode-261000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3218004843/001/cp-test_multinode-261000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp multinode-261000:/home/docker/cp-test.txt multinode-261000-m02:/home/docker/cp-test_multinode-261000_multinode-261000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m02 "sudo cat /home/docker/cp-test_multinode-261000_multinode-261000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp multinode-261000:/home/docker/cp-test.txt multinode-261000-m03:/home/docker/cp-test_multinode-261000_multinode-261000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m03 "sudo cat /home/docker/cp-test_multinode-261000_multinode-261000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp testdata/cp-test.txt multinode-261000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp multinode-261000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3218004843/001/cp-test_multinode-261000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp multinode-261000-m02:/home/docker/cp-test.txt multinode-261000:/home/docker/cp-test_multinode-261000-m02_multinode-261000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000 "sudo cat /home/docker/cp-test_multinode-261000-m02_multinode-261000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp multinode-261000-m02:/home/docker/cp-test.txt multinode-261000-m03:/home/docker/cp-test_multinode-261000-m02_multinode-261000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m03 "sudo cat /home/docker/cp-test_multinode-261000-m02_multinode-261000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp testdata/cp-test.txt multinode-261000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp multinode-261000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3218004843/001/cp-test_multinode-261000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp multinode-261000-m03:/home/docker/cp-test.txt multinode-261000:/home/docker/cp-test_multinode-261000-m03_multinode-261000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000 "sudo cat /home/docker/cp-test_multinode-261000-m03_multinode-261000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 cp multinode-261000-m03:/home/docker/cp-test.txt multinode-261000-m02:/home/docker/cp-test_multinode-261000-m03_multinode-261000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 ssh -n multinode-261000-m02 "sudo cat /home/docker/cp-test_multinode-261000-m03_multinode-261000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.33s)

TestMultiNode/serial/StopNode (2.72s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-darwin-amd64 -p multinode-261000 node stop m03: (2.21943217s)
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-261000 status: exit status 7 (250.092008ms)
-- stdout --
	multinode-261000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-261000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-261000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-261000 status --alsologtostderr: exit status 7 (253.167037ms)
-- stdout --
	multinode-261000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-261000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-261000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1208 10:24:50.642517    3540 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:24:50.642753    3540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:24:50.642759    3540 out.go:309] Setting ErrFile to fd 2...
	I1208 10:24:50.642763    3540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:24:50.642937    3540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	I1208 10:24:50.643114    3540 out.go:303] Setting JSON to false
	I1208 10:24:50.643135    3540 mustload.go:65] Loading cluster: multinode-261000
	I1208 10:24:50.643180    3540 notify.go:220] Checking for updates...
	I1208 10:24:50.643450    3540 config.go:182] Loaded profile config "multinode-261000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1208 10:24:50.643462    3540 status.go:255] checking status of multinode-261000 ...
	I1208 10:24:50.643911    3540 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:24:50.643948    3540 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:24:50.652076    3540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51289
	I1208 10:24:50.652437    3540 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:24:50.652867    3540 main.go:141] libmachine: Using API Version  1
	I1208 10:24:50.652876    3540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:24:50.653122    3540 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:24:50.653241    3540 main.go:141] libmachine: (multinode-261000) Calling .GetState
	I1208 10:24:50.653328    3540 main.go:141] libmachine: (multinode-261000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:24:50.653388    3540 main.go:141] libmachine: (multinode-261000) DBG | hyperkit pid from json: 3200
	I1208 10:24:50.654588    3540 status.go:330] multinode-261000 host status = "Running" (err=<nil>)
	I1208 10:24:50.654602    3540 host.go:66] Checking if "multinode-261000" exists ...
	I1208 10:24:50.654838    3540 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:24:50.654871    3540 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:24:50.662657    3540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51291
	I1208 10:24:50.662993    3540 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:24:50.663302    3540 main.go:141] libmachine: Using API Version  1
	I1208 10:24:50.663321    3540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:24:50.663560    3540 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:24:50.663679    3540 main.go:141] libmachine: (multinode-261000) Calling .GetIP
	I1208 10:24:50.663763    3540 host.go:66] Checking if "multinode-261000" exists ...
	I1208 10:24:50.663995    3540 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:24:50.664023    3540 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:24:50.673110    3540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51293
	I1208 10:24:50.673464    3540 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:24:50.673791    3540 main.go:141] libmachine: Using API Version  1
	I1208 10:24:50.673807    3540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:24:50.673989    3540 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:24:50.674077    3540 main.go:141] libmachine: (multinode-261000) Calling .DriverName
	I1208 10:24:50.674207    3540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 10:24:50.674226    3540 main.go:141] libmachine: (multinode-261000) Calling .GetSSHHostname
	I1208 10:24:50.674296    3540 main.go:141] libmachine: (multinode-261000) Calling .GetSSHPort
	I1208 10:24:50.674377    3540 main.go:141] libmachine: (multinode-261000) Calling .GetSSHKeyPath
	I1208 10:24:50.674457    3540 main.go:141] libmachine: (multinode-261000) Calling .GetSSHUsername
	I1208 10:24:50.674555    3540 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/multinode-261000/id_rsa Username:docker}
	I1208 10:24:50.709832    3540 ssh_runner.go:195] Run: systemctl --version
	I1208 10:24:50.713508    3540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 10:24:50.723213    3540 kubeconfig.go:92] found "multinode-261000" server: "https://192.169.0.13:8443"
	I1208 10:24:50.723234    3540 api_server.go:166] Checking apiserver status ...
	I1208 10:24:50.723275    3540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 10:24:50.734256    3540 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1942/cgroup
	I1208 10:24:50.744122    3540 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/podf32eb8f99db8cd6a7b53726a83c35b95/7ba8b7e6682f5d552d85340e048c3f366acdeb969ab6daea21bd125b08315075"
	I1208 10:24:50.744196    3540 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podf32eb8f99db8cd6a7b53726a83c35b95/7ba8b7e6682f5d552d85340e048c3f366acdeb969ab6daea21bd125b08315075/freezer.state
	I1208 10:24:50.750388    3540 api_server.go:204] freezer state: "THAWED"
	I1208 10:24:50.750403    3540 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1208 10:24:50.753686    3540 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1208 10:24:50.753701    3540 status.go:421] multinode-261000 apiserver status = Running (err=<nil>)
	I1208 10:24:50.753714    3540 status.go:257] multinode-261000 status: &{Name:multinode-261000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 10:24:50.753729    3540 status.go:255] checking status of multinode-261000-m02 ...
	I1208 10:24:50.753970    3540 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:24:50.753996    3540 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:24:50.761838    3540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51297
	I1208 10:24:50.762188    3540 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:24:50.762564    3540 main.go:141] libmachine: Using API Version  1
	I1208 10:24:50.762582    3540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:24:50.762800    3540 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:24:50.762909    3540 main.go:141] libmachine: (multinode-261000-m02) Calling .GetState
	I1208 10:24:50.762985    3540 main.go:141] libmachine: (multinode-261000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:24:50.763049    3540 main.go:141] libmachine: (multinode-261000-m02) DBG | hyperkit pid from json: 3228
	I1208 10:24:50.764233    3540 status.go:330] multinode-261000-m02 host status = "Running" (err=<nil>)
	I1208 10:24:50.764240    3540 host.go:66] Checking if "multinode-261000-m02" exists ...
	I1208 10:24:50.764476    3540 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:24:50.764500    3540 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:24:50.772387    3540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51299
	I1208 10:24:50.772715    3540 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:24:50.773051    3540 main.go:141] libmachine: Using API Version  1
	I1208 10:24:50.773065    3540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:24:50.773245    3540 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:24:50.773349    3540 main.go:141] libmachine: (multinode-261000-m02) Calling .GetIP
	I1208 10:24:50.773429    3540 host.go:66] Checking if "multinode-261000-m02" exists ...
	I1208 10:24:50.773686    3540 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:24:50.773709    3540 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:24:50.781609    3540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51301
	I1208 10:24:50.781939    3540 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:24:50.782281    3540 main.go:141] libmachine: Using API Version  1
	I1208 10:24:50.782295    3540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:24:50.782521    3540 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:24:50.782624    3540 main.go:141] libmachine: (multinode-261000-m02) Calling .DriverName
	I1208 10:24:50.782760    3540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 10:24:50.782772    3540 main.go:141] libmachine: (multinode-261000-m02) Calling .GetSSHHostname
	I1208 10:24:50.782861    3540 main.go:141] libmachine: (multinode-261000-m02) Calling .GetSSHPort
	I1208 10:24:50.782948    3540 main.go:141] libmachine: (multinode-261000-m02) Calling .GetSSHKeyPath
	I1208 10:24:50.783024    3540 main.go:141] libmachine: (multinode-261000-m02) Calling .GetSSHUsername
	I1208 10:24:50.783107    3540 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17738-1113/.minikube/machines/multinode-261000-m02/id_rsa Username:docker}
	I1208 10:24:50.819973    3540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 10:24:50.829219    3540 status.go:257] multinode-261000-m02 status: &{Name:multinode-261000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1208 10:24:50.829237    3540 status.go:255] checking status of multinode-261000-m03 ...
	I1208 10:24:50.829483    3540 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:24:50.829508    3540 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:24:50.837802    3540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51304
	I1208 10:24:50.838184    3540 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:24:50.838554    3540 main.go:141] libmachine: Using API Version  1
	I1208 10:24:50.838567    3540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:24:50.838764    3540 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:24:50.838861    3540 main.go:141] libmachine: (multinode-261000-m03) Calling .GetState
	I1208 10:24:50.838942    3540 main.go:141] libmachine: (multinode-261000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:24:50.839012    3540 main.go:141] libmachine: (multinode-261000-m03) DBG | hyperkit pid from json: 3322
	I1208 10:24:50.840188    3540 main.go:141] libmachine: (multinode-261000-m03) DBG | hyperkit pid 3322 missing from process table
	I1208 10:24:50.840221    3540 status.go:330] multinode-261000-m03 host status = "Stopped" (err=<nil>)
	I1208 10:24:50.840229    3540 status.go:343] host is not running, skipping remaining checks
	I1208 10:24:50.840241    3540 status.go:257] multinode-261000-m03 status: &{Name:multinode-261000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.72s)

TestMultiNode/serial/StartAfterStop (27.17s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-261000 node start m03 --alsologtostderr: (26.812441623s)
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.17s)

TestMultiNode/serial/RestartKeepsNodes (161.7s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-261000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-261000
E1208 10:25:28.500533    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-261000: (18.421948187s)
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-261000 --wait=true -v=8 --alsologtostderr
E1208 10:27:44.652110    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-261000 --wait=true -v=8 --alsologtostderr: (2m23.170151213s)
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-261000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (161.70s)

TestMultiNode/serial/DeleteNode (2.96s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p multinode-261000 node delete m03: (2.628591585s)
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.96s)

TestMultiNode/serial/StopMultiNode (16.47s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 stop
E1208 10:28:12.341737    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-261000 stop: (16.315766251s)
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-261000 status: exit status 7 (76.806553ms)
-- stdout --
	multinode-261000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-261000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-261000 status --alsologtostderr: exit status 7 (77.364239ms)
-- stdout --
	multinode-261000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-261000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1208 10:28:19.125217    3695 out.go:296] Setting OutFile to fd 1 ...
	I1208 10:28:19.125447    3695 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:28:19.125454    3695 out.go:309] Setting ErrFile to fd 2...
	I1208 10:28:19.125458    3695 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 10:28:19.125636    3695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17738-1113/.minikube/bin
	I1208 10:28:19.125822    3695 out.go:303] Setting JSON to false
	I1208 10:28:19.125844    3695 mustload.go:65] Loading cluster: multinode-261000
	I1208 10:28:19.125886    3695 notify.go:220] Checking for updates...
	I1208 10:28:19.126153    3695 config.go:182] Loaded profile config "multinode-261000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1208 10:28:19.126166    3695 status.go:255] checking status of multinode-261000 ...
	I1208 10:28:19.126583    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:28:19.126624    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:28:19.134793    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51485
	I1208 10:28:19.135115    3695 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:28:19.135533    3695 main.go:141] libmachine: Using API Version  1
	I1208 10:28:19.135544    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:28:19.135786    3695 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:28:19.135895    3695 main.go:141] libmachine: (multinode-261000) Calling .GetState
	I1208 10:28:19.135994    3695 main.go:141] libmachine: (multinode-261000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:28:19.136054    3695 main.go:141] libmachine: (multinode-261000) DBG | hyperkit pid from json: 3608
	I1208 10:28:19.136989    3695 main.go:141] libmachine: (multinode-261000) DBG | hyperkit pid 3608 missing from process table
	I1208 10:28:19.137010    3695 status.go:330] multinode-261000 host status = "Stopped" (err=<nil>)
	I1208 10:28:19.137017    3695 status.go:343] host is not running, skipping remaining checks
	I1208 10:28:19.137023    3695 status.go:257] multinode-261000 status: &{Name:multinode-261000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 10:28:19.137043    3695 status.go:255] checking status of multinode-261000-m02 ...
	I1208 10:28:19.137273    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1208 10:28:19.137295    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1208 10:28:19.145006    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51487
	I1208 10:28:19.145341    3695 main.go:141] libmachine: () Calling .GetVersion
	I1208 10:28:19.145721    3695 main.go:141] libmachine: Using API Version  1
	I1208 10:28:19.145746    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I1208 10:28:19.145937    3695 main.go:141] libmachine: () Calling .GetMachineName
	I1208 10:28:19.146043    3695 main.go:141] libmachine: (multinode-261000-m02) Calling .GetState
	I1208 10:28:19.146125    3695 main.go:141] libmachine: (multinode-261000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1208 10:28:19.146191    3695 main.go:141] libmachine: (multinode-261000-m02) DBG | hyperkit pid from json: 3627
	I1208 10:28:19.147084    3695 main.go:141] libmachine: (multinode-261000-m02) DBG | hyperkit pid 3627 missing from process table
	I1208 10:28:19.147121    3695 status.go:330] multinode-261000-m02 host status = "Stopped" (err=<nil>)
	I1208 10:28:19.147131    3695 status.go:343] host is not running, skipping remaining checks
	I1208 10:28:19.147141    3695 status.go:257] multinode-261000-m02 status: &{Name:multinode-261000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.47s)

TestMultiNode/serial/RestartMultiNode (109.91s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-261000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E1208 10:29:15.032717    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-261000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m49.576733138s)
multinode_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-261000 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (109.91s)

TestPreload (150.73s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-509000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E1208 10:34:15.033402    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-509000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m16.718339521s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-509000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-509000 image pull gcr.io/k8s-minikube/busybox: (1.263943957s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-509000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-509000: (8.242844135s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-509000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E1208 10:35:38.092725    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-509000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (59.079244284s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-509000 image list
helpers_test.go:175: Cleaning up "test-preload-509000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-509000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-509000: (5.261899867s)
--- PASS: TestPreload (150.73s)

TestScheduledStopUnix (105.56s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-355000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-355000 --memory=2048 --driver=hyperkit : (34.099339323s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-355000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-355000 -n scheduled-stop-355000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-355000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-355000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-355000 -n scheduled-stop-355000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-355000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-355000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-355000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-355000: exit status 7 (73.315569ms)
-- stdout --
	scheduled-stop-355000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-355000 -n scheduled-stop-355000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-355000 -n scheduled-stop-355000: exit status 7 (67.281702ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-355000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-355000
--- PASS: TestScheduledStopUnix (105.56s)

TestSkaffold (108.99s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe4292341949 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-097000 --memory=2600 --driver=hyperkit 
E1208 10:37:44.654074    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-097000 --memory=2600 --driver=hyperkit : (35.292064596s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe4292341949 run --minikube-profile skaffold-097000 --kube-context skaffold-097000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe4292341949 run --minikube-profile skaffold-097000 --kube-context skaffold-097000 --status-check=true --port-forward=false --interactive=false: (55.234593417s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6799dfb449-6xdcz" [7ed3d0bd-100b-48bc-9cae-e85d7d97e00a] Running
E1208 10:39:07.717722    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.017161608s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6f7d8665c4-26zh7" [2e8035a1-b4bf-4888-9cd3-ced0a9ce4e7d] Running
E1208 10:39:15.047162    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006913419s
helpers_test.go:175: Cleaning up "skaffold-097000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-097000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-097000: (5.268843287s)
--- PASS: TestSkaffold (108.99s)

TestRunningBinaryUpgrade (171.96s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.2258517606.exe start -p running-upgrade-625000 --memory=2200 --vm-driver=hyperkit 
E1208 10:42:44.664055    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.2258517606.exe start -p running-upgrade-625000 --memory=2200 --vm-driver=hyperkit : (1m30.318177391s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-625000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1208 10:44:05.204558    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:05.210172    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:05.221495    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:05.242803    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:05.284307    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:05.365098    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:05.526555    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:05.847038    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:06.488825    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:07.769071    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:10.330094    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:15.044312    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:44:15.450508    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:44:25.690555    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-625000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m15.234662141s)
helpers_test.go:175: Cleaning up "running-upgrade-625000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-625000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-625000: (5.318170163s)
--- PASS: TestRunningBinaryUpgrade (171.96s)

TestKubernetesUpgrade (147.24s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m12.134436953s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-133000
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-133000: (8.243054783s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-133000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-133000 status --format={{.Host}}: exit status 7 (68.343602ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=hyperkit : (32.564872534s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-133000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (411.280009ms)

-- stdout --
	* [kubernetes-upgrade-133000] minikube v1.32.0 on Darwin 14.1.2
	  - MINIKUBE_LOCATION=17738
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-133000
	    minikube start -p kubernetes-upgrade-133000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1330002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-133000 --kubernetes-version=v1.29.0-rc.1

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=hyperkit : (30.317544301s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-133000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-133000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-133000: (3.455719518s)
--- PASS: TestKubernetesUpgrade (147.24s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.52s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17738
- KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1224357864/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1224357864/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1224357864/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1224357864/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.52s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.86s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17738
- KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3066364955/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3066364955/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3066364955/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3066364955/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.86s)

TestStoppedBinaryUpgrade/Setup (1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.37s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-200000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-200000: (3.366965618s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.37s)

TestPause/serial/Start (49.39s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-396000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-396000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (49.386017626s)
--- PASS: TestPause/serial/Start (49.39s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.52s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-046000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-046000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (517.185309ms)

-- stdout --
	* [NoKubernetes-046000] minikube v1.32.0 on Darwin 14.1.2
	  - MINIKUBE_LOCATION=17738
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17738-1113/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17738-1113/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.52s)

TestNoKubernetes/serial/StartWithK8s (37.22s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-046000 --driver=hyperkit 
E1208 10:47:44.659998    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-046000 --driver=hyperkit : (37.050472615s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-046000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.22s)

TestPause/serial/SecondStartNoReconfiguration (38.46s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-396000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-396000 --alsologtostderr -v=1 --driver=hyperkit : (38.44331915s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.46s)

TestNoKubernetes/serial/StartWithStopK8s (16.59s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-046000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-046000 --no-kubernetes --driver=hyperkit : (13.979217303s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-046000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-046000 status -o json: exit status 2 (148.402596ms)

-- stdout --
	{"Name":"NoKubernetes-046000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-046000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-046000: (2.464829019s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.59s)

TestNoKubernetes/serial/Start (18.08s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-046000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-046000 --no-kubernetes --driver=hyperkit : (18.083522875s)
--- PASS: TestNoKubernetes/serial/Start (18.08s)

TestPause/serial/Pause (0.54s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-396000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.54s)

TestPause/serial/VerifyStatus (0.16s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-396000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-396000 --output=json --layout=cluster: exit status 2 (161.447312ms)

-- stdout --
	{"Name":"pause-396000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-396000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.16s)

TestPause/serial/Unpause (0.57s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-396000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

TestPause/serial/PauseAgain (0.57s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-396000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.57s)

TestPause/serial/DeletePaused (5.27s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-396000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-396000 --alsologtostderr -v=5: (5.274790815s)
--- PASS: TestPause/serial/DeletePaused (5.27s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-046000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-046000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (128.876832ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

TestNoKubernetes/serial/ProfileList (29.82s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (29.583604254s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (29.82s)

TestPause/serial/VerifyDeletedResources (0.21s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.21s)

TestNetworkPlugins/group/auto/Start (48.98s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
E1208 10:49:05.202197    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:49:15.039821    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (48.978255844s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.98s)

TestNoKubernetes/serial/Stop (2.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-046000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-046000: (2.254081132s)
--- PASS: TestNoKubernetes/serial/Stop (2.25s)

TestNoKubernetes/serial/StartNoArgs (17.15s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-046000 --driver=hyperkit 
E1208 10:49:32.889611    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-046000 --driver=hyperkit : (17.146070931s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (17.15s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-046000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-046000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (127.159352ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

TestNetworkPlugins/group/kindnet/Start (58.98s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (58.979210528s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.98s)

TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

TestNetworkPlugins/group/auto/NetCatPod (12.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vn84r" [abc4a775-1159-4b7f-9a88-07d56b81f39d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vn84r" [abc4a775-1159-4b7f-9a88-07d56b81f39d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.00911751s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.18s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gjlkp" [1f70dd59-f4bf-4d79-bb07-1e94aa795382] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014323022s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/custom-flannel/Start (58.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (58.757756287s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.76s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qpnb2" [5f69a7f3-3ef1-4a24-b7f6-cab2f6a3f7dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qpnb2" [5f69a7f3-3ef1-4a24-b7f6-cab2f6a3f7dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.013646009s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.19s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/false/Start (59.74s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (59.744509482s)
--- PASS: TestNetworkPlugins/group/false/Start (59.74s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.17s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lkdvs" [3f9465ec-88c6-4ad2-ba1b-49dec3c6b774] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lkdvs" [3f9465ec-88c6-4ad2-ba1b-49dec3c6b774] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.006871539s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Start (53.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (53.247025409s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.25s)

TestNetworkPlugins/group/false/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.16s)

TestNetworkPlugins/group/false/NetCatPod (11.17s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qmtx2" [95bf4964-45df-4470-ba70-c04fdb637796] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qmtx2" [95bf4964-45df-4470-ba70-c04fdb637796] Running
E1208 10:52:18.097668    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006386536s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.17s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (58.45s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E1208 10:52:44.657618    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (58.449950191s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.45s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vcc77" [546bfece-a530-4277-a8dc-d45514610392] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vcc77" [546bfece-a530-4277-a8dc-d45514610392] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007407791s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (50.73s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (50.731798453s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.73s)

TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9mj2c" [11870155-5466-4806-b116-5b8602ede8d8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.012391701s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

TestNetworkPlugins/group/flannel/NetCatPod (12.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dwvql" [fa5330a1-9d40-452c-97d6-4e095099024d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dwvql" [fa5330a1-9d40-452c-97d6-4e095099024d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.007787676s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.17s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/Start (49.28s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (49.275919472s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (49.28s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

TestNetworkPlugins/group/bridge/NetCatPod (11.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z6ck4" [67641ced-dc14-4597-8750-81e2039cc035] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z6ck4" [67641ced-dc14-4597-8750-81e2039cc035] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.008605575s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.17s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestStartStop/group/old-k8s-version/serial/FirstStart (139.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-684000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E1208 10:54:50.882010    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/auto-387000/client.crt: no such file or directory
E1208 10:55:01.122115    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/auto-387000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-684000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m19.753222817s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (139.75s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dv8v9" [e130d3a6-3f45-45a1-9e95-90dcce77c7b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dv8v9" [e130d3a6-3f45-45a1-9e95-90dcce77c7b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.010083348s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.23s)

TestNetworkPlugins/group/kubenet/DNS (32.85s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-387000 exec deployment/netcat -- nslookup kubernetes.default
E1208 10:55:21.602624    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/auto-387000/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-387000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.10900887s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context kubenet-387000 exec deployment/netcat -- nslookup kubernetes.default
E1208 10:55:38.044670    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:55:38.049868    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:55:38.061439    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:55:38.081639    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:55:38.123287    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:55:38.203827    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:55:38.365241    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:55:38.686921    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:55:39.328498    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:55:40.609144    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:55:43.170846    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-387000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146574212s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1208 10:55:47.706212    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 10:55:48.291864    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
net_test.go:175: (dbg) Run:  kubectl --context kubenet-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (32.85s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E1208 11:11:40.662269    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (87.42s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-698000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.1
E1208 10:56:19.012779    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:56:40.599712    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:40.604865    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:40.615609    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:40.635780    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:40.676554    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:40.757520    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:40.918009    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:41.238970    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:41.880027    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:43.160436    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:45.720886    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:50.841532    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:56:59.972716    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:57:01.082136    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-698000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.1: (1m27.416113362s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (87.42s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-684000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7541a58e-428c-4a09-a546-3cc2ad263328] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1208 10:57:12.484980    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:57:12.490078    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:57:12.500170    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:57:12.521111    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:57:12.561250    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:57:12.641627    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:57:12.802877    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:57:13.123556    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [7541a58e-428c-4a09-a546-3cc2ad263328] Running
E1208 10:57:13.764703    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:57:15.045530    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:57:17.606798    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.021593742s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-684000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-684000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-684000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.67s)

TestStartStop/group/old-k8s-version/serial/Stop (8.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-684000 --alsologtostderr -v=3
E1208 10:57:21.562678    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:57:22.727875    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:57:24.483773    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/auto-387000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-684000 --alsologtostderr -v=3: (8.275113002s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-684000 -n old-k8s-version-684000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-684000 -n old-k8s-version-684000: exit status 7 (67.09474ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-684000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/old-k8s-version/serial/SecondStart (490.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-684000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E1208 10:57:32.968301    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-684000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (8m10.25480712s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-684000 -n old-k8s-version-684000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (490.42s)

TestStartStop/group/no-preload/serial/DeployApp (9.57s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-698000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d325f0c8-80a8-4453-a12a-662ffa0413ef] Pending
helpers_test.go:344: "busybox" [d325f0c8-80a8-4453-a12a-662ffa0413ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d325f0c8-80a8-4453-a12a-662ffa0413ef] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.017440264s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-698000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.57s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-698000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1208 10:57:44.653771    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-698000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/no-preload/serial/Stop (8.29s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-698000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-698000 --alsologtostderr -v=3: (8.288269865s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.29s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-698000 -n no-preload-698000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-698000 -n no-preload-698000: exit status 7 (67.323502ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-698000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/no-preload/serial/SecondStart (297.83s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-698000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.1
E1208 10:57:53.449509    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:58:02.523202    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:58:02.578227    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:02.584569    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:02.595874    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:02.617747    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:02.658077    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:02.739184    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:02.900923    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:03.221622    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:03.863850    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:05.146111    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:07.707528    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:12.829151    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:21.891995    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 10:58:23.069418    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:34.410533    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 10:58:39.697830    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:39.703569    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:39.713788    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:39.734501    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:39.775291    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:39.855411    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:40.016304    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:40.336843    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:40.978483    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:42.259716    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:43.549342    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:58:44.820040    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:58:49.940133    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:59:00.180209    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:59:05.193264    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 10:59:15.032055    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 10:59:20.660098    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 10:59:21.207300    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:21.213135    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:21.224425    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:21.245078    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:21.285374    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:21.367510    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:21.527659    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:21.848250    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:22.488390    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:23.768473    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:24.442485    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 10:59:24.510519    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 10:59:26.329179    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:31.449482    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:40.629815    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/auto-387000/client.crt: no such file or directory
E1208 10:59:41.690688    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 10:59:56.330786    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 11:00:01.620254    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 11:00:02.171074    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 11:00:05.206876    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:05.212249    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:05.223702    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:05.244525    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:05.286743    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:05.368518    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:05.530079    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:05.850296    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:06.491460    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:07.771776    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:08.322577    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/auto-387000/client.crt: no such file or directory
E1208 11:00:10.333958    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:15.454871    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:25.696633    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:28.242747    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 11:00:38.041544    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 11:00:43.132250    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 11:00:46.177354    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:00:46.430560    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 11:01:05.732081    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E1208 11:01:23.540151    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 11:01:27.137796    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:01:40.597657    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 11:02:05.052839    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 11:02:08.280977    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E1208 11:02:12.481750    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 11:02:40.168882    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 11:02:44.648946    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 11:02:49.057030    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-698000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.1: (4m57.663875229s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-698000 -n no-preload-698000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (297.83s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tn9k5" [6a7cb308-c88f-4d3c-a393-b4112656d654] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013654853s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tn9k5" [6a7cb308-c88f-4d3c-a393-b4112656d654] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009069524s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-698000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-698000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/no-preload/serial/Pause (1.96s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-698000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-698000 -n no-preload-698000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-698000 -n no-preload-698000: exit status 2 (161.025329ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-698000 -n no-preload-698000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-698000 -n no-preload-698000: exit status 2 (160.204527ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-698000 --alsologtostderr -v=1
E1208 11:03:02.574344    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-698000 -n no-preload-698000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-698000 -n no-preload-698000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.96s)

TestStartStop/group/embed-certs/serial/FirstStart (60.68s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-117000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4
E1208 11:03:30.269794    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 11:03:39.693672    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 11:04:05.189510    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 11:04:07.379006    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-117000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4: (1m0.680551478s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.68s)

TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-117000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cd5ae578-da9b-4f63-808d-2ac9799dcf92] Pending
helpers_test.go:344: "busybox" [cd5ae578-da9b-4f63-808d-2ac9799dcf92] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cd5ae578-da9b-4f63-808d-2ac9799dcf92] Running
E1208 11:04:15.027928    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.017029365s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-117000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-117000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-117000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/embed-certs/serial/Stop (8.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-117000 --alsologtostderr -v=3
E1208 11:04:21.202786    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-117000 --alsologtostderr -v=3: (8.251235469s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.25s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-117000 -n embed-certs-117000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-117000 -n embed-certs-117000: exit status 7 (68.095658ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-117000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/embed-certs/serial/SecondStart (300.16s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-117000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4
E1208 11:04:40.626152    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/auto-387000/client.crt: no such file or directory
E1208 11:04:48.893128    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E1208 11:05:05.202073    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:05:32.896879    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-117000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4: (4m59.9940916s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-117000 -n embed-certs-117000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (300.16s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9rnfc" [fc19e680-f0e3-4d30-9136-962c1db35e5e] Running
E1208 11:05:38.037023    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012470368s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9rnfc" [fc19e680-f0e3-4d30-9136-962c1db35e5e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006154501s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-684000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-684000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/old-k8s-version/serial/Pause (1.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-684000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-684000 -n old-k8s-version-684000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-684000 -n old-k8s-version-684000: exit status 2 (154.583077ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-684000 -n old-k8s-version-684000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-684000 -n old-k8s-version-684000: exit status 2 (154.595854ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-684000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-684000 -n old-k8s-version-684000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-684000 -n old-k8s-version-684000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.78s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-743000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4
E1208 11:06:40.592216    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-743000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4: (50.842533907s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.84s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-743000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [42fd7600-de64-4fd4-a234-a9dd5279d8f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [42fd7600-de64-4fd4-a234-a9dd5279d8f8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.017676474s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-743000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-743000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-743000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-743000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-743000 --alsologtostderr -v=3: (8.261807587s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-743000 -n default-k8s-diff-port-743000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-743000 -n default-k8s-diff-port-743000: exit status 7 (67.3683ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-743000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (294.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-743000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4
E1208 11:07:10.107316    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:10.113141    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:10.123520    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:10.144004    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:10.184226    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:10.265285    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:10.425760    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:10.746072    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:11.386968    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:12.477438    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/false-387000/client.crt: no such file or directory
E1208 11:07:12.667790    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:15.227931    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:20.348341    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:30.588378    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:34.693699    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:34.698956    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:34.709467    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:34.729733    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:34.770749    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:34.851243    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:35.011361    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:35.331668    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:35.972982    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:37.253132    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:39.814812    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:44.645444    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/ingress-addon-legacy-251000/client.crt: no such file or directory
E1208 11:07:44.935416    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:07:51.069280    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:07:55.175790    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:08:02.570858    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E1208 11:08:15.655760    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:08:32.102561    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:08:39.761515    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E1208 11:08:56.687776    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
E1208 11:08:58.157632    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 11:09:05.258675    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/skaffold-097000/client.crt: no such file or directory
E1208 11:09:15.097098    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/functional-688000/client.crt: no such file or directory
E1208 11:09:21.271212    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/bridge-387000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-743000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4: (4m54.067741301s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-743000 -n default-k8s-diff-port-743000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (294.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2xmn6" [da8176ec-35f7-4483-9bc3-8d10fd90c90b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013113729s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2xmn6" [da8176ec-35f7-4483-9bc3-8d10fd90c90b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007553541s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-117000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-117000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (1.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-117000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-117000 -n embed-certs-117000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-117000 -n embed-certs-117000: exit status 2 (160.336215ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-117000 -n embed-certs-117000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-117000 -n embed-certs-117000: exit status 2 (158.630349ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-117000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-117000 -n embed-certs-117000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-117000 -n embed-certs-117000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-683000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.1
E1208 11:09:54.023041    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
E1208 11:10:05.270712    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kubenet-387000/client.crt: no such file or directory
E1208 11:10:18.606893    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/no-preload-698000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-683000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.1: (48.132287162s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-683000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-683000 --alsologtostderr -v=3
E1208 11:10:38.106451    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-683000 --alsologtostderr -v=3: (8.306436076s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-683000 -n newest-cni-683000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-683000 -n newest-cni-683000: exit status 7 (67.741796ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-683000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-683000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.1
E1208 11:11:03.747451    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/auto-387000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-683000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.1: (37.350019101s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-683000 -n newest-cni-683000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.52s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-683000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (1.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-683000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-683000 -n newest-cni-683000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-683000 -n newest-cni-683000: exit status 2 (156.913242ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-683000 -n newest-cni-683000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-683000 -n newest-cni-683000: exit status 2 (159.672416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-683000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-683000 -n newest-cni-683000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-683000 -n newest-cni-683000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.78s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-knblm" [73384635-d82d-47ef-9a6d-ac2bc93bebc3] Running
E1208 11:12:01.157307    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012533444s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-knblm" [73384635-d82d-47ef-9a6d-ac2bc93bebc3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008806482s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-743000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-743000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (1.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-743000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-743000 -n default-k8s-diff-port-743000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-743000 -n default-k8s-diff-port-743000: exit status 2 (160.131089ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-743000 -n default-k8s-diff-port-743000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-743000 -n default-k8s-diff-port-743000: exit status 2 (160.745854ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-743000 --alsologtostderr -v=1
E1208 11:12:10.175206    1585 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17738-1113/.minikube/profiles/old-k8s-version-684000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-743000 -n default-k8s-diff-port-743000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-743000 -n default-k8s-diff-port-743000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.79s)

                                                
                                    

Test skip (21/310)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-387000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-387000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-387000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-387000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-387000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-387000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-387000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-387000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-387000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-387000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/hosts:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/resolv.conf:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-387000

>>> host: crictl pods:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: crictl containers:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> k8s: describe netcat deployment:
error: context "cilium-387000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-387000" does not exist

>>> k8s: netcat logs:
error: context "cilium-387000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-387000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-387000" does not exist

>>> k8s: coredns logs:
error: context "cilium-387000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-387000" does not exist

>>> k8s: api server logs:
error: context "cilium-387000" does not exist

>>> host: /etc/cni:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: ip a s:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: ip r s:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: iptables-save:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: iptables table nat:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-387000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-387000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-387000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-387000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-387000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-387000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-387000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-387000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-387000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-387000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-387000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: kubelet daemon config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> k8s: kubelet logs:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-387000

>>> host: docker daemon status:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: docker daemon config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: docker system info:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: cri-docker daemon status:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: cri-docker daemon config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: cri-dockerd version:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: containerd daemon status:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: containerd daemon config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: containerd config dump:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: crio daemon status:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: crio daemon config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/crio:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: crio config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

----------------------- debugLogs end: cilium-387000 [took: 5.493113652s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-387000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-387000
--- SKIP: TestNetworkPlugins/group/cilium (5.87s)

TestStartStop/group/disable-driver-mounts (0.39s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-735000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-735000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.39s)